ICLR

Title
Theoretical and Empirical Study of Adversarial Examples
Abstract
Many techniques have been developed to defend against adversarial examples at scale. So far, the most successful defenses generate adversarial examples during each training step and add them to the training data, which brings significant computational overhead. In this paper, we investigate defenses against adversarial attacks. First, we propose feature smoothing, a simple data augmentation method with little computational overhead. Essentially, feature smoothing trains a neural network on virtual training data formed by interpolating the features of a pair of samples, with the new label remaining the same as that of the dominant data point. The intuition behind feature smoothing is to generate virtual data points that lie as close as possible to adversarial examples, while avoiding the computational burden of generating data during training. Our experiments on the MNIST and CIFAR10 datasets explore different combinations of known regularization and data augmentation methods and show that feature smoothing with logit squeezing performs best for both adversarial and clean accuracy. Second, we propose a unified framework to understand the connections and differences among these efficient methods by analyzing the bias and variance of the decision boundary. We show that under some symmetry assumptions, label smoothing, logit squeezing, weight decay, mixup, and feature smoothing all produce an unbiased estimate of the decision boundary with smaller estimated variance. All of these methods except weight decay also remain stable when the assumptions no longer hold.
1 INTRODUCTION
Machine learning models are often vulnerable to adversarial examples, which are maliciously designed to cause misclassification. In the area of computer vision, for instance, object recognition classifiers are much more likely to incorrectly classify images that have been modified with small, often imperceptible perturbations. Similar problems also occur in natural language processing (Miyato et al., 2017), where small perturbations of text can easily fool a label classification model. It is therefore important to develop machine learning models that are resistant to adversarial examples in situations where an attacker may attempt to interfere, for example with autonomous vehicles (Papernot et al., 2017). Understanding the design mechanisms of adversarial examples can also help researchers gain a better understanding of the performance of machine learning models, especially deep learning models. In this paper, we introduce an efficient feature smoothing method to improve the adversarial robustness of neural networks and also build a theoretical framework to understand how different approaches help with adversarial accuracy.
Different adversarial training methods have been proposed to increase robustness by augmenting training data with adversarial examples. Goodfellow et al. (2015) developed the fast gradient sign method (FGSM), which efficiently generates adversarial examples with a "single-step" attack based on a linearization of the model's loss. Their trained model is robust to single-step perturbations but remains vulnerable to more costly "multi-step" attacks. Madry et al. (2017) extended FGSM by proposing a multi-step variant of FGSM, which is essentially projected gradient descent (PGD). They suggested that adversarial training with the PGD attack is a universal first-order adversarial defense, meaning that models trained against PGD attacks are also resistant to many other first-order attacks. Their PGD attack consists of initializing the search for an adversarial example at a random point within the allowed norm ball, then running several iterations of the basic iterative method to find an adversarial example. Kannan et al. (2018) then introduced adversarial logit pairing (ALP), which encourages the logits of pairs of examples and their corresponding adversarial examples to be similar. Logit pairing improves accuracy on adversarial examples over training based on PGD alone.
The above successful approaches perform data augmentation by generating adversarial examples during each training step, which unfortunately brings a significant computational burden to the training process. In contrast, more "efficient" training methods that do not hinder training speed have also been shown to improve adversarial robustness (in this paper, "efficient" methods refers to data augmentation and regularization methods including mixup, label smoothing, logit squeezing, weight decay, and our proposed feature smoothing). Szegedy et al. (2016) proposed label smoothing, which trains a classifier using soft rather than hard targets for the cross-entropy loss: the correct class is given a target probability of 1−α and the remaining α probability mass is divided uniformly among the incorrect classes. Label smoothing reduces overfitting by preventing a network from assigning full probability to each training point, and also offers a small amount of robustness to adversarial examples (Kannan et al., 2018). Kannan et al. (2018) proposed logit squeezing, which penalizes the logits of each input example. It was shown that, combined with adding Gaussian noise to the input examples, logit squeezing gives even better results than ALP on some datasets, for example MNIST and SVHN. Zhang et al. (2018) performed data augmentation by training the model on virtual input points formed by interpolating two random examples from the training set and their labels (mixup), increasing both robustness to adversarial examples and accuracy on clean test data.
In parallel, many theories have been proposed to understand the power and existence of adversarial examples. Transferability is shown to be a common property of adversarial examples: Szegedy et al. (2014) and Papernot et al. (2016) found that adversarial examples generated for a specific neural network can fool both the same network trained on a different dataset and different networks trained on the same dataset. The existence of adversarial examples is still an open question. Possible reasons have been suggested in recent papers, such as low density (Szegedy et al., 2014; Pei et al., 2017) and decision boundaries lying too close to the training data (Tanay & Griffin, 2016). However, few papers theoretically explain the similarities and differences between the methods above, especially in terms of their estimation of decision boundaries. Goodfellow et al. (2015) discussed the differences between weight decay and adversarial training by comparing their loss functions in logistic regression, but did not show how these two methods affect the estimation and accuracy.
The above discussion leaves us with two questions:

• Without adding any computational burden during training, these "efficient" methods benefit mainly from data augmentation and regularization and, as a result, resist adversarial examples to some extent. Since most of them are not specifically designed to resist adversarial examples, can we develop an "efficient" approach specifically designed to be robust to adversarial examples?

• What are the connections and differences among these "efficient" methods? Can we build a unified framework to analyze them?
Motivated by these two questions, we investigate defenses against adversarial attacks; our contribution is two-fold. We first propose feature smoothing, a data interpolation method that softens the features of the input. We show that feature smoothing obtains better performance than other "efficient" approaches on both MNIST and CIFAR10, and we observe the best performance among all "efficient" methods when combining feature smoothing with the logit squeezing strategy. We also propose a unified framework to understand how different "efficient" approaches influence the estimation of the decision boundary. In particular, based on both simulations and theoretical analysis of logistic regression, we show that under some symmetry assumption, label smoothing, logit squeezing, weight decay, mixup, feature smoothing, and data extrapolation all give an unbiased estimate of the boundary with smaller estimation variance, but regularization with weight decay is more sensitive when the assumption does not hold. We believe this is why weight decay can hurt accuracy on clean test data. Our framework also partially extends to deep convolutional neural networks.
The paper is organized as follows. Section 2 presents our proposed method and other related “efficient” methods. Section 3 reports the performance of feature smoothing against other “efficient” methods. We conduct theoretical analysis and explore the connections and differences among different methods in Section 4. The last section concludes.
2 METHOD
Following the idea of adversarial training, we propose the feature smoothing method, which also adds new data to the training set to improve robustness. Rather than generating adversarial examples based on the current model, feature smoothing mimics adversarial examples through data interpolation and Gaussian noise applied directly to the original training data. We introduce feature smoothing and discuss several related methods in the following.
2.1 FEATURE SMOOTHING
In a classification problem, we aim to recover the unknown decision boundary based on the training data (Figure 1(a)). As long as the decision boundary is correctly estimated, there will be no adversarial examples. Tanay & Griffin (2016) suggested that neural networks that estimate the decision boundary too close to the training data are what causes adversarial vulnerability. The incorrect estimation of the boundary may be caused by the low density (Szegedy et al., 2014) of input data in the regions where adversarial examples exist. In adversarial training, the estimation is improved by adding adversarial examples to the input (Figure 1(b)) during each step.
Based on this idea, if we can generate 'low density' data directly from the original training set, we can improve the estimation as adversarial training does, but at a much smaller computational cost. We now introduce feature smoothing, a simple data augmentation approach that generates new virtual training data by interpolating the features of a pair of random samples. Virtual training data are constructed as follows:
$$\tilde{x}^{(i)} = (1-\alpha)\,x^{(i)} + \alpha\,x^{(j)}, \qquad \tilde{y}^{(i)} = y^{(i)},$$
where (x^{(i)}, y^{(i)}) and (x^{(j)}, y^{(j)}) are two examples drawn at random from the training data, and 0 ≤ α < 0.5. When x^{(i)} and x^{(j)} belong to different classes and the segment between these two points intersects the decision boundary exactly once, x̃^{(i)} is closer to the boundary than x^{(i)} or x^{(j)}. Figure 1(c) shows that adding new data interpolated between classes can help with the estimation of the decision boundary.
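A minimal NumPy sketch of this construction (the function name and batch handling are our own, not from the paper):

```python
import numpy as np

def feature_smoothing_batch(x, y, alpha=0.3, n_new=10, rng=None):
    """Generate virtual points x~ = (1 - alpha) * x_i + alpha * x_j,
    keeping the label of the dominant point x_i (requires alpha < 0.5)."""
    if rng is None:
        rng = np.random.default_rng()
    assert 0 <= alpha < 0.5, "the dominant point must keep weight > 0.5"
    i = rng.integers(0, len(x), size=n_new)    # indices of dominant points
    j = rng.integers(0, len(x), size=n_new)    # indices of partner points
    x_new = (1 - alpha) * x[i] + alpha * x[j]  # interpolated features
    y_new = y[i]                               # dominant label is kept
    return x_new, y_new
```

The default α = 0.3 and n_new = 10 mirror the values reported for MNIST in Section 3.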
Furthermore, Gaussian noise also helps extend the range of x. Figure 1(d) shows that adding Gaussian random noise with proper variance to the input can also push the estimated boundary closer to the true boundary compared with training on the original clean data. Hence we add Gaussian noise to our feature smoothing method as well:
$$\tilde{x}^{(i)} = P\big(x^{(i)} + \epsilon\big), \qquad \tilde{y}^{(i)} = y^{(i)},$$
where ϵ ∼ Normal(0, σ²) and P(x) projects x back to the range of the original data. To distinguish the data interpolation part from the Gaussian noise part, in the following we use 'feature smoothing' to refer only to data interpolation and 'noise' for the Gaussian noise part. A detailed illustration of how feature smoothing helps the estimation of the boundary is given in Section 4.
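A companion sketch of the noise step, assuming pixel data scaled to [0, 1] as in Section 3 and implementing the projection P as clipping (our assumption; the paper only says P projects to the original data range):

```python
def gaussian_noise_batch(x, sigma=0.5, lo=0.0, hi=1.0, rng=None):
    """x~ = P(x + eps) with eps ~ N(0, sigma^2); P clips the result
    back to the original data range [lo, hi]."""
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.normal(0.0, sigma, size=x.shape)
    return np.clip(x + eps, lo, hi)  # projection P onto the data range
```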
2.2 RELATED METHODS
Though starting from a different intuition, feature smoothing turns out to be very similar to mixup (Zhang et al., 2018). In mixup, additional virtual data points are generated by interpolating both the features and the labels of the original training data:
$$\tilde{x} = (1-\alpha)x^{(i)} + \alpha x^{(j)}, \qquad \tilde{y} = (1-\alpha)y^{(i)} + \alpha y^{(j)},$$
where α ∈ [0, 1]. Mixup can be understood as a form of data augmentation that encourages the model to behave linearly in between training examples. Zhang et al. (2018) argued that this linear behavior reduces the amount of undesirable oscillation when predicting data outside the training examples. In contrast, our feature smoothing method keeps the label of the interpolation the same as that of the dominant data point, which maintains the S-shaped curve of the logistic model and also makes feature smoothing easier to combine with regularization methods. A more detailed comparison can be found in Sec. 4; a sketch of the mixup construction, for comparison with feature smoothing above, follows.
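A sketch in the same style as the feature smoothing helper; the substantive difference is that the (one-hot) label is interpolated as well:

```python
def mixup_batch(x, y_onehot, rng=None):
    """mixup (Zhang et al., 2018): interpolate both features and one-hot
    labels; alpha ~ Beta(8, 8) matches the setting used in Section 3."""
    if rng is None:
        rng = np.random.default_rng()
    alpha = rng.beta(8.0, 8.0)
    perm = rng.permutation(len(x))  # random partner pairing
    x_new = (1 - alpha) * x + alpha * x[perm]
    y_new = (1 - alpha) * y_onehot + alpha * y_onehot[perm]
    return x_new, y_new
```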
Label smoothing (LaS) and logit squeezing (LoS) are two other efficient methods that improve adversarial accuracy. Let y ∈ R^K be the one-hot label for K classes; label smoothing (Szegedy et al., 2016) softens the target by replacing y with
$$\tilde{y} = \frac{\delta}{K-1}(1 - y) + (1-\delta)\,y,$$
where δ = 0.1 is shown to be the best choice (Pereyra et al., 2017). Assume we train a model with parameters θ on a batch of m data points {(x^{(i)}, y^{(i)}), i = 1, 2, ..., m}, y^{(i)} ∈ {0, 1}^K. Let f(x; θ) denote the mapping from x to the logits of the model, and let L^{(clean)} denote the cross-entropy loss for the batch:
$$L^{(clean)} = -\frac{1}{m}\sum_{i=1}^{m}\sum_{j=1}^{K} y_j^{(i)} \log\big(p_\theta(y_j^{(i)} \mid x^{(i)})\big).$$
After some calculation, the label smoothing loss can be rewritten as:
$$L_{LaS} = -\frac{1}{m}\sum_{i=1}^{m}\sum_{j=1}^{K} \tilde{y}_j^{(i)} \log\big(p_\theta(y_j^{(i)} \mid x^{(i)})\big) = L^{(clean)} - \frac{1}{m}\sum_{i=1}^{m}\sum_{j=1}^{K} \frac{1 - K y_j^{(i)}}{K-1}\,\delta f_j(x^{(i)}; \theta).$$
Notice that if we assume the model obtains a good estimate of f(x; θ), then f_j(x; θ) < 0 when y_j = 0 and f_j(x; θ) > 0 when y_j = 1. In the binary classification case, L_{LaS} can therefore be written as L^{(clean)} + δ|f(x; θ)|, which indicates that label smoothing predicts logits with smaller magnitude and therefore avoids overfitting.
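A PyTorch sketch of the label smoothing loss above (our own implementation of the standard construction, not code from the paper):

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, y, delta=0.1):
    """Cross entropy against softened targets: the true class gets
    1 - delta and the remaining delta mass is split uniformly over
    the other K - 1 classes."""
    K = logits.size(-1)
    y_soft = torch.full_like(logits, delta / (K - 1))
    y_soft.scatter_(-1, y.unsqueeze(-1), 1.0 - delta)  # set true class
    return -(y_soft * F.log_softmax(logits, dim=-1)).sum(-1).mean()
```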
Similarly, logit squeezing (Kannan et al., 2018) applies an L2 norm directly to the logits as a penalty on over-confidence:
$$L_{LoS} = L^{(clean)} + \frac{\lambda}{m}\sum_{i=1}^{m} \big\|f(x^{(i)})\big\|_2,$$
where L^{(clean)} is the original loss of the neural network and f(x^{(i)}) is the logit vector of image x^{(i)}, as above.
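A matching sketch of the logit squeezing loss as written above (continuing the imports from the previous sketch; note the penalty is on the L2 norm of each logit vector, not its square):

```python
def logit_squeezing_loss(logits, y, lam=0.2):
    """Clean cross entropy plus an L2 penalty on each example's logit
    vector, discouraging over-confident (large-magnitude) logits."""
    clean = F.cross_entropy(logits, y)
    penalty = logits.norm(p=2, dim=-1).mean()  # (1/m) sum_i ||f(x_i)||_2
    return clean + lam * penalty
```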
Weight decay is another well-known regularizer that efficiently reduces overfitting of neural networks by adding an L1 or L2 penalty on the weights w,
$$L_{wd} = L^{(clean)} + \lambda \|w\|_2^2.$$
However, weight decay is shown to be less helpful against adversarial examples than label smoothing and logit squeezing, as discussed in Sec. 4.
Combination of different approaches In feature smoothing and mixup, we generate new data points as linear combinations of x^{(i)} and x^{(j)}. For the y value of these virtual points, mixup uses a linear interpolation, while feature smoothing chooses the dominant label. Nevertheless, it is possible that feature smoothing or mixup adds mislabeled noise to the training data, especially when x^{(i)} and x^{(j)} are not symmetric about the boundary. In that situation, label smoothing and logit squeezing are better ways to avoid overfitting. So we also consider combining these methods to obtain better test and adversarial accuracy; a sketch of one such combined training step follows.
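A sketch wiring together the helpers above with the MNIST hyperparameters from Section 3; the exact order in which interpolation and noise are applied is our assumption, as the paper does not pin it down:

```python
def training_step(model, opt, x, y, alpha=0.3, sigma=0.5, lam=0.2, n_new=10):
    """Feature smoothing + Gaussian noise for augmentation, logit
    squeezing as the loss: the combination reported best in Section 3."""
    x_np, y_np = x.numpy(), y.numpy()
    x_fs, y_fs = feature_smoothing_batch(x_np, y_np, alpha, n_new)
    x_aug = gaussian_noise_batch(np.concatenate([x_np, x_fs]), sigma)
    y_aug = np.concatenate([y_np, y_fs])
    xb = torch.as_tensor(x_aug, dtype=torch.float32)
    yb = torch.as_tensor(y_aug, dtype=torch.long)
    opt.zero_grad()
    loss = logit_squeezing_loss(model(xb), yb, lam)
    loss.backward()
    opt.step()
    return loss.item()
```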
3 EXPERIMENTS
3.1 RESULTS ON MNIST
We experiment with feature smoothing, label smoothing, mixup, logit squeezing, and their possible combinations on MNIST, with the results summarized in Table 1. We find that combining feature smoothing and logit squeezing gives the best performance on both clean test data and adversarial examples. For all experiments in this section, we train our models for 200 epochs and use the Adam optimizer with a learning rate of 10^{-4}. In several methods, random noise is added to the training data, with the same σ value of 0.5.
For MNIST, when α ranges between 0.2 and 0.4 we observe similar performance for feature smoothing, whereas a large α of 0.5 introduces too much label noise and causes underfitting. We use a final α value of 0.3 when reporting results in Table 1. Chosen by cross-validation, we use α ∼ Beta(8, 8) for mixup and δ = 0.1 for label smoothing. For logit squeezing, we use a weight λ of 0.2, as in Kannan et al. (2018). In feature smoothing and mixup, 10 new data points are generated for each batch with batch size m = 50.
We use the same LeNet model as Madry et al. (2017) and apply the same attack parameters they provided. After scaling the image pixels to [0, 1] (dividing by 255), we use a per-step perturbation of 0.01, 40 total attack steps with 1 random start, and a total adversarial perturbation threshold of 0.3. Similar to Madry et al. (2017), we also generate black-box examples for MNIST by independently initializing and training a copy of the LeNet model and then generating PGD attacks based on that model. Both the cross-entropy loss and the correct-wrong loss are used. A sketch of the PGD attack used throughout follows.
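A hedged sketch of the attack, following the standard L∞ PGD recipe of Madry et al. (2017) with the MNIST parameters above:

```python
def pgd_attack(model, x, y, eps=0.3, step=0.01, n_steps=40):
    """L-infinity PGD: one random start in the eps-ball, then iterated
    signed-gradient ascent steps with projection back to the ball."""
    x = x.detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)             # stay in pixel range
    return x_adv.detach()
```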
Each single method improves the adversarial accuracy by a small amount, but combinations of them lead to much better performance (Table 1). Logit squeezing combined with feature smoothing and Gaussian random noise achieves the best performance among all of the "efficient" methods.
3.2 RESULTS ON CIFAR10
We follow Madry’s lab for the experiments in CIFAR10. For all experiments in this section, we train our models for 80000 global steps with batch size m = 128 in each step. We use Momentum at 0.9 for our optimizer with a learning rate at 0.1 for the begining, 0.01 after 40000 global steps and 0.004 after 60000 global steps. Weight decay with λ = 0.0002 is also applied to all experiments. We use α ∈ Beta(8, 8) for mixup, δ = 0.1 for label smoothing, λ = 0.1 for logit squeezing , α = 0.2 for feature smoothing, and σ = 0.5 for Gaussian random noise. In feature smoothing and mixup, 10 new data points are randomly generated on each batch with batch size m = 128.
We apply the ResNet model and the same attack parameters as they used. We use a per-step perturbation of 2.0, 20 total attack steps with 1 random start, and a total adversarial perturbation threshold of 8. Black-box adversarial examples are also generated by independently initializing and training a model of the same architecture. Logit squeezing combined with feature smoothing and Gaussian random noise again performs best among all of the 'efficient' methods (Table 2).
4 THEORETICAL EXPLANATIONS
In this section, we show that the above "efficient" methods increase neural networks' adversarial robustness by improving the estimation of the decision boundary. The improvement relies on two components: (1) an unbiased estimate of the boundary; (2) smaller estimation variance. Given the same training data, these methods estimate a boundary closer to the true boundary than vanilla neural networks do. Our simulations and theoretical results focus mainly on logistic regression; the ideas are then discussed for deep convolutional neural networks.
To gain some intuition about how the above methods improve the estimation, we start from a logistic regression model with binary classes. Assume a feature vector x follows some distribution P_x in R^d, with w ∈ R^d and b ∈ R; the corresponding label y then follows a Bernoulli distribution with probabilities given by:
$$p := P(y=1) = \frac{1}{1 + e^{-(wx+b)}}, \qquad P(y=0) = \frac{1}{1 + e^{wx+b}}. \tag{1}$$
Based on how they change the loss function, we divide these methods into two categories: (1) regularization methods: label smoothing (LaS), logit squeezing (LoS), and weight decay (wd); (2) augmentation methods: mixup and feature smoothing. Regularization methods add a penalty term to the loss function directly, while augmentation methods modify the loss function by adding new virtual data. We analyze the properties of these methods by category in the following subsections; the proofs of the theorems are included in the Appendix.
4.1 REGULARIZATION METHODS
Our main theorem shows that all of the regularization methods estimate the decision boundary with smaller variance, and the estimation is unbiased when x is symmetric about the boundary. With one-dimensional x and binary classes, the variance of the decision boundary can be defined as var(b/w), w ≠ 0. Let p̂ denote the estimated probability. The confidence interval of p̂, which indicates the confidence interval of the boundary, is narrowed by the regularization methods, especially when the support of the distribution of x is far from the boundary (Figure 2(a)). As the value of w increases, the corresponding variances of w, b, and the decision boundary are also better controlled by the regularization methods than by vanilla logistic regression (Figure 3). For vanilla logistic regression, when w is large enough, the variance of the boundary grows at an exponential rate in w; with these regularization methods, the variance keeps decreasing even when w is very large. This observation also holds in higher dimensions and with multiple classes (Figure 5). Inspired by these observations, we prove the following theorems in one dimension (1-D) to further explain the phenomena in the simulations. A sketch of the kind of Monte Carlo simulation behind these figures follows.
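A sketch with weight decay as the representative regularizer; the data-generating parameters are illustrative, not the paper's exact simulation settings:

```python
import numpy as np
from scipy.optimize import minimize

def boundary_variance(w_true=4.0, lam=0.0, n=100, n_rep=500, seed=0):
    """Monte Carlo estimate of var(-b_hat / w_hat) for 1-D logistic
    regression, optionally with a weight-decay penalty lam * w^2."""
    rng = np.random.default_rng(seed)
    boundaries = []
    for _ in range(n_rep):
        x = rng.uniform(-1.0, 1.0, n)
        p = 1.0 / (1.0 + np.exp(-w_true * x))  # true model (1), b = 0
        y = rng.binomial(1, p)

        def nll(theta):  # penalized negative log-likelihood
            z = theta[0] * x + theta[1]
            ce = np.mean(y * np.logaddexp(0, -z) + (1 - y) * np.logaddexp(0, z))
            return ce + lam * theta[0] ** 2
        w_hat, b_hat = minimize(nll, x0=[1.0, 0.0]).x
        boundaries.append(-b_hat / w_hat)
    return np.var(boundaries)

# Increasing lam should shrink the variance of the estimated boundary:
# print(boundary_variance(lam=0.0), boundary_variance(lam=0.1))
```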
Theorem 4.1. Label smoothing, logit squeezing, and weight decay all estimate the decision boundary with smaller variance in the 1-D logistic regression model.
Theorem 4.2. When x is symmetric with respect to the boundary, label smoothing, logit squeezing, and weight decay give unbiased estimates of the boundary in the 1-D logistic regression model.
The symmetry assumption is not unrealistic for image classification problems, since we can always assume the true boundary lies in the middle of the two classes. However, the stability of these methods when the assumption does not hold is also important. We show that label smoothing and logit squeezing are relatively more stable than weight decay when x is asymmetric (Figure 7).
4.2 AUGMENTATION METHODS
Rather than adding regularization to the loss function directly, adversarial training, mixup, and feature smoothing all 'improve' the loss function by changing the distribution of x. Figure 4 shows how the distribution of x influences the estimation of the boundary in different cases. Naturally, when the data are pushed closer to the true boundary, the boundary estimate improves due to reduced variance. Following the same analysis as above, x being near the boundary leads to a larger estimated p(1−p), which yields smaller variance for w, b, and the boundary. When the symmetry assumption is violated, more careful selection of the original data points is needed to avoid adding too much noise to the training set.
Following the above explanation, Theorem 4.3 shows that adding data around the boundary, with labels generated from the true distribution, to the training set can narrow the variance of the boundary even though the sample size remains the same. Adversarial training, mixup, and feature smoothing estimate the labels in different ways. We further show that feature smoothing achieves smaller variance than mixup when α is properly chosen (Sec. A.4).
Theorem 4.3. Adding data around the boundary narrows the variance of the boundary estimate by moving the distribution of x closer to the boundary. The estimate is unbiased if all labels for the new data are balanced/correctly assigned.
4.3 EXTENSION TO NEURAL NETWORKS
In more complex models such as convolutional neural networks (CNNs), the model can be divided into two parts: hidden layers that transform the input data x → f(x), and the classification model that applies the softmax function (or sigmoid function for binary classification) to f(x). Our results for regularization methods extend to CNNs since the softmax function is just multi-class logistic regression. For augmentation methods, we also believe that an interpolation of the input data implies an interpolation of the transformed data after the hidden layers. For simplicity, we assume the nonlinear layers of the CNN consist only of ReLU and max-pooling. Both ReLU and max-pooling satisfy the following properties: let x̃ = αx^{(i)} + (1−α)x^{(j)}; then
$$0 \le \mathrm{ReLU}(\tilde{x}) \le \alpha\,\mathrm{ReLU}(x^{(i)}) + (1-\alpha)\,\mathrm{ReLU}(x^{(j)}),$$
$$\tfrac{1}{2}\big(\alpha\,\mathrm{maxp}(x^{(i)}) + (1-\alpha)\,\mathrm{maxp}(x^{(j)})\big) \le \mathrm{maxp}(\tilde{x}) \le \alpha\,\mathrm{maxp}(x^{(i)}) + (1-\alpha)\,\mathrm{maxp}(x^{(j)}), \tag{2}$$
where maxp denotes max-pooling. The first inequality in (2) holds when each component of x^{(i)} and x^{(j)} is non-negative; given that the pooling layer follows the ReLU layer, this assumption is valid. It further implies that f(x̃) ≤ αf(x^{(i)}) + (1−α)f(x^{(j)}), which means augmentation on the data can be considered augmentation on the logits, so we may apply our logistic regression framework. The inequalities are easy to verify numerically, as in the sketch below.
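A quick numerical check on random non-negative inputs (the regime assumed in the text), using a toy width-2 max-pooling of our own choosing:

```python
import numpy as np

def check_eq2(n_trials=10000, dim=16, alpha=0.3, seed=0):
    """Empirically verify the inequalities in Eq. (2)."""
    rng = np.random.default_rng(seed)
    relu = lambda v: np.maximum(v, 0.0)
    maxp = lambda v: v.reshape(-1, 2).max(axis=1)  # toy 1-D max-pooling
    for _ in range(n_trials):
        xi, xj = rng.uniform(0, 1, dim), rng.uniform(0, 1, dim)
        xt = alpha * xi + (1 - alpha) * xj
        mix_relu = alpha * relu(xi) + (1 - alpha) * relu(xj)
        assert np.all(relu(xt) <= mix_relu + 1e-12)
        mix_pool = alpha * maxp(xi) + (1 - alpha) * maxp(xj)
        assert np.all(0.5 * mix_pool <= maxp(xt) + 1e-12)
        assert np.all(maxp(xt) <= mix_pool + 1e-12)
    print("Eq. (2) held in all", n_trials, "trials")
```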
5 DISCUSSION
We have proposed feature smoothing, a straightforward data augmentation method, as an efficient way to increase the adversarial robustness of neural networks. In our experiments, feature smoothing combined with logit squeezing shows the best performance on both MNIST and CIFAR10. We found that α ∈ [0.2, 0.4] gives similar results when we apply a PGD attack with total perturbation threshold ε = 0.3. With smaller perturbations, a smaller α (for example, α = 0.1 for ε = 0.1) also gives good results. As future work, more combinations of different techniques can be explored.
We also built a framework to explain how different regularization and augmentation methods improve the estimation of decision boundaries for logistic regression. Our main theorems show that all of these methods achieve smaller estimation variance of the decision boundary while keeping the estimate unbiased. In some extreme cases, for example, correctly labeled data around the boundary for only one specific class (Figure 7), vanilla logistic regression estimates the boundary incorrectly with certainty, but all of the above methods resolve the problem. We also extend the analysis to neural networks based on two facts: (1) softmax regression is the generalization of logistic regression to multi-class classification; (2) activation functions such as ReLU and max-pooling preserve the linear inequalities in Eq. (2).
ACKNOWLEDGEMENT
We would like to thank Dr. Jean-Marc Langlois and Dr. Alyssa Glass for their valuable input. We thank Dr. Harini Kannan for providing a detailed explanation of her work, so that we could successfully replicate the experimental results from her paper. We also thank Weiqiang Shi for providing engineering support and Dr. Hua Guo for helpful feedback on drafts of this article.
A PROOFS
The proofs of Theorems 4.1–4.3 are derived in this section. We focus on binary logistic regression throughout the proofs.
A.1 PROOF OF THEOREM 4.1
Let ŵ and b̂ denote the estimates of w and b. The decrease in variance comes mainly from two sources: (1) estimates ŵ and b̂ with smaller magnitude; (2) a bias-variance trade-off. We first show that adding regularizers always produces w with smaller magnitude, which leads to smaller variance. Then we show that the bias in p̂ introduced by the penalties also leads to smaller variance, specifically when p̂ is closer to 0.5 than the true p.
Based on the Fisher’s Information, when estimated parameters are MLE, the variances of ŵ and b̂ are given by:
var(ŵ) = (Exx 2[p̂(1− p̂)])−1, (3)
var(b̂) = (Ex[p̂(1− p̂)])−1. (4)
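For completeness, a short restatement of where (3) and (4) come from (standard Fisher information algebra, written in our notation):

```latex
% One-observation log-likelihood, with \hat{p} = \sigma(wx + b):
%   \ell(w, b) = y \log \hat{p} + (1 - y) \log(1 - \hat{p}).
% Using d\hat{p}/d(wx + b) = \hat{p}(1 - \hat{p}):
\frac{\partial \ell}{\partial b} = y - \hat{p}, \qquad
\frac{\partial^2 \ell}{\partial b^2} = -\hat{p}(1 - \hat{p}), \qquad
\frac{\partial^2 \ell}{\partial w^2} = -x^2 \hat{p}(1 - \hat{p}).
% Taking -E[\,\cdot\,] gives the Fisher information, and inverting the
% diagonal entries yields (3) and (4).
```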
The decision boundary is {x : wx + b = 0}, so we use var(−b̂/ŵ) to measure the variance of the boundary estimate; by the delta method,
$$\mathrm{var}\Big(-\frac{\hat{b}}{\hat{w}}\Big) = \frac{\mathrm{var}(\hat{b})}{w^2} + b^2\,\mathrm{var}(1/\hat{w}) + o(1).$$
Without loss of generality, we further assume b = 0 and w > 0. The variance of b̂/ŵ is then equal to
$$\frac{1}{w^2\,E_x[\hat{p}(1-\hat{p})]}.$$
If the distribution of x is a delta mass, i.e., P_x = δ_x, the variance of b̂/ŵ can be further written as
$$g(w) = \frac{1}{w^2\,\hat{p}(1-\hat{p})},$$
and the derivative with respect to w is
$$g'(w) = \frac{-2w - w^2 x(1-2\hat{p})}{w^4\,\hat{p}(1-\hat{p})}.$$
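This expression follows from a short quotient-rule computation (our own restatement of the step):

```latex
% Write g(w) = 1 / h(w) with h(w) = w^2 \hat{p}(1 - \hat{p}), b = 0, so
% d\hat{p}/dw = x \hat{p}(1 - \hat{p}) and
% \frac{d}{dw}\big[\hat{p}(1 - \hat{p})\big] = x \hat{p}(1 - \hat{p})(1 - 2\hat{p}).
h'(w) = 2w \hat{p}(1 - \hat{p}) + w^2 x \hat{p}(1 - \hat{p})(1 - 2\hat{p})
      = w \hat{p}(1 - \hat{p})\big[\,2 + wx(1 - 2\hat{p})\,\big],
% hence
g'(w) = -\frac{h'(w)}{h(w)^2}
      = \frac{-2w - w^2 x (1 - 2\hat{p})}{w^4 \hat{p}(1 - \hat{p})}.
```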
Given our assumption that w > 0, it follows immediately that x(1−2p̂) < 0 and that wx(1−2p̂) is monotonically decreasing; moreover, wx(1−2p̂) → −∞ as w → ∞. Therefore there exists a constant C, depending only on x, such that for w > C we have −2w − w²x(1−2p̂) > 0. We have shown so far that, for the MLE, the variance of the boundary is increasing in w when w > C.
However, since we add a regularization term to the original loss function, the estimator is no longer the MLE. An approximation of var(b̂) is
$$\frac{E_y(y-\hat{p})^2}{\hat{p}^2(1-\hat{p})^2},$$
where y ∼ Bernoulli(p_x). With the regularization methods and under our assumption on w, we have 0.5 < p̂ < p_x for x > 0 and p_x < p̂ < 0.5 for x < 0. Therefore
$$\frac{E_y(y-\hat{p})^2}{\hat{p}^2(1-\hat{p})^2} \le \frac{1}{\hat{p}(1-\hat{p})},$$
which indicates a smaller variance of the boundary than the MLE estimate with the same ŵ and b̂.
A.2 PROOF OF THEOREM 4.2
When {x} is symmetric about the boundary x = 0, the data set can be split into two groups, {x_p} containing the positive values and {x_n} containing the negative values, which are mirror images of each other. We further assume the corresponding labels are also approximately symmetric, which is easy to achieve when the sample size is large enough. The loss function then automatically decomposes into L_p = L({x_p}) and L_n = L({x_n}). The minimizers ŵ and b̂ of L_p and L_n must coincide since the input data are symmetric, so ŵ and b̂ are also the minimizers of the whole loss function L = L_p + L_n. This means that for any x in the positive part and its mirror image x′ in the negative part, the estimated p_θ̂(y|x) + p_θ̂(y′|x′) equals one, which implies an unbiased estimate of the boundary.
A.3 PROOF OF THEOREM 4.3
As mentioned in Section 2, new data points
$$\tilde{x} = \alpha x^{(i)} + (1-\alpha)x^{(j)}$$
are added to the input in both methods, while the corresponding ỹ are estimated either by linear interpolation or from the dominant point of x^{(i)} and x^{(j)}. Let p̃ denote the true probability at x̃; we claim that adding x̃ with ỹ ∼ Bernoulli(p̃) to the input decreases the variance of the boundary. As above, the variance of the boundary can be estimated as
$$\mathrm{var}\Big(-\frac{\hat{b}}{\hat{w}}\Big) = \frac{1}{w^2\,E_x[\hat{p}(1-\hat{p})]},$$
where x is generated from a distribution P_x. When ỹ is generated from the true probability, the MLE estimate remains unbiased. On the other hand, if x^{(i)} and x^{(j)} are from different classes and symmetric about the decision boundary, then x̃ is closer to the boundary than x^{(i)} and x^{(j)}, and therefore p̃(1−p̃) > p_{x^{(i)}}(1−p_{x^{(i)}}).
A.4 REMARK ON THEOREM 4.3
As mentioned above, given x from a delta mass distribution, the first derivative of the variance with respect to w is:
$$g'(w) = \frac{-2w - w^2 x(1-2\hat{p})}{w^4\,\hat{p}(1-\hat{p})}.$$
Without loss of generality, we still assume x > 0, b = 0, and w > 0. Then g′(w) = 0 gives w = 2/(x(2p̂−1)); g′(w) < 0 if w < 2/(x(2p̂−1)), while g′(w) > 0 otherwise. An x̃ close to the boundary (0 when b = 0) also gives a probability p̃ close to 0.5, i.e., (1−2p̃) close to 0. Therefore, over a large range w ∈ (0, 2/(x(2p̂−1))), the variance is decreasing in w. As a result, feature smoothing with x̃ gives even smaller variance than the original MLE.
B SIMULATION RESULTS
B.1 LOGISTIC REGRESSION WITH HIGH-DIMENSIONAL FEATURES AND MULTIPLE CLASSES

[Figure 5: Estimated value and confidence interval of b/w in 3-D with multiple classes.]

B.2 ASYMMETRIC FEATURES

However, in the real world we never have the perfect scenario in which {x} is strictly symmetrically distributed with respect to the boundary. We further argue that regularizers based on wx + b, including label smoothing and logit squeezing, are more tolerant of unbalanced data than weight decay, which regularizes w only. Figure 6 shows how these four methods perform when {x} is not symmetrically generated, in two scenarios: (a) the data size is unbalanced with respect to the decision boundary; (b) the data distribution is unbalanced with respect to the boundary. Label smoothing and logit squeezing are clearly less sensitive to the distribution of x in both scenarios. In contrast, vanilla logistic regression and weight decay are more sensitive: confidence intervals for vanilla logistic regression become wider and do not behave consistently as the value of x changes, and the estimated mean decision boundary (p̂ = 0.5) for weight decay deviates from the true one, making it less robust than the other methods.
B.3 ANOTHER REALISTIC CASE
Now let us consider another data-imbalance scenario under a different data-generating mechanism and see how the different methods perform. Note that all of our analysis above assumed that the label y given feature x was generated from the true model (1); in other words, y is a random draw from a Bernoulli (Multinomial for multiple classes) distribution. Now consider another generation mechanism that is also common in the real world: given input x, y is deterministic, given by the indicator function y = I(wx + b > 0), instead of following a distribution.
But some classes have data around the boundary and some do not, i.e., the distribution of x is unbalanced: for example, most x ∈ [−1, −0.9] ∪ [0.9, 1] but some x ∈ [0, 0.1]. Vanilla logistic regression fails to detect the true boundary in this case, but both the regularization methods and the augmentation methods improve the estimation (Figure 7).

Review Questions

1. What is the main contribution of the paper regarding feature smoothing?
2. How does the reviewer assess the novelty and originality of the proposed method compared to other works like mixup?
3. What are the strengths and weaknesses of the paper's experimental analysis?
4. Do you have any concerns or questions about the theoretical motivation and discussion provided in the paper?
5. Are there any limitations or assumptions made in the paper that the reviewer finds questionable or unclear?

Review
In this paper the authors introduce a novel method to defend against adversarial attacks that they call feature smoothing. The authors then discuss feature smoothing and related “cheap” data augmentation-based defenses against adversarial attacks in a nice general discussion. Next, the authors present empirical data comparing and contrasting the different methods they introduce as a means of constructing models that are robust to adversarial examples on MNIST and CIFAR10. The authors close by attempting to theoretically motivate their strategy in terms of reducing variance of the decision boundary.
Overall, I found this paper pleasant to read. However, it is unclear to me exactly how novel its contributions are. As discussed by the authors, there are strong similarities between feature smoothing and mixup, although I did enjoy the unifying exposition presented in the text. The paper also seems to suffer from some of the simplifying assumptions the authors make. For example, in Sec. 2 the authors claim that x̃ will be closer to the decision boundary than x; however, this is only true if the decision boundary is convex.
I appreciated the extensive experiments run by the authors. However, I wish they had included results from adversarial training. It seems (looking at Madry’s paper) that the defense offered by these cheap methods is still significantly worse than adversarial training. I feel that some discussion of this is warranted even if the goal is to reduce computational complexity.
Finally, I am not sure what to make of the theory presented. While it is nice to see that the variance of the decision boundary is reduced by regularization in the case of 1-dimensional logistic regression, I am not at all convinced by the authors' generalization to neural networks. In particular, their discussion seems to hold only for one-hidden-layer networks, although the authors don't offer much clarity here. For example, Eq. 2 is literally just a statement that ReLU is a convex function, and it is clearly the case that multiple layers of the network will violate this hypothesis. Overall, I did not find this discussion particularly compelling.
ICLR | Title
Theoretical and Empirical Study of Adversarial Examples
Abstract
Many techniques are developed to defend against adversarial examples at scale. So far, the most successful defenses generate adversarial examples during each training step and add them to the training data. Yet, this brings significant computational overhead. In this paper, we investigate defenses against adversarial attacks. First, we propose feature smoothing, a simple data augmentation method with little computational overhead. Essentially, feature smoothing trains a neural network on virtual training data as an interpolation of features from a pair of samples, with the new label remaining the same as the dominant data point. The intuition behind feature smoothing is to generate virtual data points as close as adversarial examples, and to avoid the computational burden of generating data during training. Our experiments on MNIST and CIFAR10 datasets explore different combinations of known regularization and data augmentation methods and show that feature smoothing with logit squeezing performs best for both adversarial and clean accuracy. Second, we propose an unified framework to understand the connections and differences among different efficient methods by analyzing the biases and variances of decision boundary. We show that under some symmetrical assumptions, label smoothing, logit squeezing, weight decay, mix up and feature smoothing all produce an unbiased estimation of the decision boundary with smaller estimated variance. All of those methods except weight decay are also stable when the assumptions no longer hold.
1 INTRODUCTION
Machine learning models are often vulnerable to adversarial examples, which are maliciously designed to cause misclassification. In the area of computer vision, for instance, object recognition classifiers are much more likely to incorrectly classify images that have been modified with small, often inpreceptible perturbations. Similar problems also occur in natural language processing area, see (Miyato et al., 2017), where small perturbations of text can easily fool a label classification model. It is therefore important to develop machine learning models that are resistant to adversarial examples in situations where attacker may attemp to interfere, for example with autonomous vehicles (Papernot et al., 2017). Understanding the design mechanisms of adversarial examples can also help researchers to gain a better understanding of the performance of machine learning, especially deep learning models. In this paper, we introduce an efficient feature smoothing method to improve the adversarial robustness of neural networks and also build a theoretical framework to understand how different approaches help with the adversarial accuracy.
Different adversarial training methods have been proposed to increase robustness by augmenting training data with adversarial examples. Goodfellow et al. (2015) developed the fast gradient signed method (FGSM), which efficiently generated adversarial example by a “single-step” attack based on a linearization of the model’s loss. Their trained model is robust to single-step perturbations but remains vulnerable to more costly “multi-step” attacks. Madry et al. (2017) extended FGSM by proposing a multi-step variant FGSM, which is essentially projected gradient descent(PGD). They suggested that adversarial training with the PGD attack is a universal first order adversary defense, which means that models trained against PGD attacks are also resistant against many other first order attacks. Their PGD attacks consists of initializing the search for an adversarial examples at a random point within the allowed norm ball, then running several iterations of the basic iterative method to find an adversarial examples. Kannan et al. (2018) then introduced a logit pairing method (ALP)
which encourages the logits for pairs of examples and their corresponding adversarial examples to be similar. Logit pairing improves accuracy on adversarial examples over trainings based on PGD.
The above successful approaches performed data augmentation by generating adversarial examples during each training step, which will unfortunately bring significant computational burden to the training process. In contrast, more “efficient” training methods without hindering the training speed have also been shown to improve adversarial robustness (In this paper we refer “efficient” methods as data augmentation and regularization methods including mixup, label smoothing, logit squeezing, weight decay, and our proposed feature smoothing). Szegedy et al. (2016) proposed label smoothing, which trains a classifier using soft targets for the cross-entropy loss rather than the hard targets. The correct class is given a target probability of 1− α and the remaining α probability mass is divided uniformly among incorrect classes. Label smoothing reduces overfitting by preventing a network from assigning full probability to each training data, and also offers a small amount of robustness to adversarial examples (Kannan et al., 2018). Kannan et al. (2018) proposed a logit squeezing method which penalizes the logit of each input example. It is showed that combined with adding Gaussian noise into input examples, logit squeezing gave even better results than ALP in some datasets, for example MNIST and SHNV. Zhang et al. (2018) performed data augmentation by training the model on virtual input points as interpolation of two random examples from the training set and their labels, resulting in increasing both the robustness of adversarial examples and the accuracy in clean test data.
In parallel, many theorems have also been proposed to understand the power and existence of adversarial examples. Transferability is shown to be a common property of adversarial examples. Szegedy et al. (2014) and Papernot et al. (2016) found that adversarial examples generated based on a specific neural network can fool both the same neural network trained with different datasets and different neural networks trained with the same dataset. The existence of adversarial examples is still an open question. Possible reasons have been suggested in recent papers, such as low density (Szegedy et al., 2014; Pei et al., 2017), decision boundary too close to the training data (Tanay & Griffin, 2016). However, there are few papers theoretically explaining the similarities and differences between those methods, especially based on their estimation of decision boundaries. Goodfellow et al. (2015) discussed the differences between weight decay and adversarial training by comparing their loss functions in logistic regression, but didn’t show how these two methods affect the estimation and accuracy.
The above discussion leaves us two questions:
• Without adding any computational burden during training, these “efficient” methods mainly benefit from data augmentation and regularization, and as a result, resist against adversarial examples to some extent. As most of them are not specifically designed for resisting against adversarial examples, can we develop an “efficient” approach specifically designed to be robust to adversarial examples? • What are the connections and differences among these “efficient” methods? Can we build a
unified framework to analyze them?
Motivated by these two questions, we investigate defenses against adversarial attacks, and the contribution is two-fold. We first propose feature smoothing, a data interpolation method that softens the features of input. We show that feature smoothing obtains better performance than other “efficient” approaches on both MNIST and CIFAR10. We also observe the best performance when combining our feature smoothing method and logit squeezing strategy, among all “efficient” methods. We also propose a unified framework to understand how different “efficient” approaches influence the estimation of decision boundary. In particular, based on both simulations and theoretical analysis of logistic regression, we show that under some symmetrical assumption, label smoothing, logit squeezing, weight decay, mixup, feature smoothing and data extrapolation all give an unbiased estimation of boundary with smaller estimation variance. But regularization with weight decay is more sensitive when the assumption may not hold. We believe it is the reason weight decay can hurt the accuracy in clean test data. Our framework are also partially extended to deep convolutional neural networks.
The paper is organized as follows. Section 2 presents our proposed method and other related “efficient” methods. Section 3 reports the performance of feature smoothing against other “efficient” methods. We conduct theoretical analysis and explore the connections and differences among different methods in Section 4. The last section concludes.
2 METHOD
Following the idea of adversarial training, we propose feature smoothing method which also adds new data into the training set to improve the robustness. Other than generating adversarial examples based on current model, feature smoothing mimics adversarial examples by data interpolation and adding Gaussian noise directly based on the original training data. We will introduce feature smoothing and discuss several related methods in the following.
2.1 FEATURE SMOOTHING
In a classification problem, we aim to recover the unknown decision boundary based on the training data (Figure 1(a)). As long as the decision boundary is correctly estimated, there will be no adversarial examples. Tanay & Griffin (2016) suggested that neural networks which estimate decision boundary too close to the training data causes adversarial problems. The incorrect estimation of boundary may be caused by low density (Szegedy et al., 2014) of input data where adversarial examples exists. In adversarial training, the estimation is improved by adding adversarial examples into input (Figure 1(b)) during each step.
Based on this idea, if we are able to generate ‘low density’ data directly based on the original training set, we can also improve the estimation as what adversarial training does but with much smaller computational cost. We now introduce feature smoothing, a simple data augmentation approach which generates new virtual training data as interpolation of features from a pair of random samples. Virtual training data are constructed as follows:
x̃(i) = (1− α)x(i) + αx(j), ỹ(i) = y(i),
where (x(i), y(i)) and (x(j), y(j)) are two examples drawn randomly from our training data, and 0 ≤ α < 0.5. When x(i) and x(j) belong to different classes, and the interval between these two data points intercept with the decision boundary only once, x̃i is closer to the boundary than xi or xj . Figure 1(c) shows that adding new data interpolated between classes can help with the estimation of decision boundary.
Furthermore, Gaussian noise also helps extend the range of x. Figure 1(d) shows that adding Gaussian random noise with proper variance into input can also push the estimated boundary closer to the true boundary compared against original clean data. Hence we add Gaussian noise into our feature smoothing method as well:
x̃(i) = P (x(i) + ), ỹ(i) = y(i),
where ∼ Normal(0, σ2) and P (x) projects x to the range of original data. To distinguish data interpolation part and Gaussian noise part, we use ‘feature smoothing’ only referring to data interpolation and ‘noise’ for the Gaussian noise part in the following. A detailed illustration of how feature smoothing helps the estimation of boundary is discussed in Section 4.
2.2 RELATED METHODS
Though starting from different intuitions, feature smoothing turns to be very similar with mixup (Zhang et al., 2018). In mixup, additional virtual data points are generated by interpolating both features and labels of the original training data:
x̃ = (1− α)x(i) + αx(j), ỹ = (1− α)y(i) + αy(j), where α ∈ [0, 1]. Mixup can be understood as a form of data augmentation that encourages the model to behave linearly in-between training examples. Zhang et al. (2018) argued that this linear behavior reduces the amount of undesirable oscillations when predicting data outside the training examples. On the contrary, our feature smoothing method includes the interpolations with new label remaining the same as the dominant data point, which maintains the S-shaped curve of logistic model and also allow feature smoothing easier to be combined with regularization methods. More detailed comparison can be found in Sec. 4.
Label smoothing (LaS) and logit squezzing (LoS) are other two efficient ways which improve the adversarial accuracy. Let y ∈ RK be one-hot label for K classes, label smoothing (Szegedy et al., 2016) softens the target by replacing y with
ỹ = δ
K − 1 (1− y) + (1− δ)y,
where δ = 0.1 is shown to be the best choice (Pereyra et al., 2017). Assume we train a model with parameters θ on a batch of m data points {(x(i),y(i)), i = 1, 2, . . . ,m}, y(i) ∈ {0, 1}K . Let f(x;θ) denote the mapping function from x to logits of the model. Let L(clean) denote the cross entropy loss for the batch of data points as:
L(clean) = − 1 m m∑ i=1 K∑ j=1 y (i) j log(pθ(y (i) j |x (i))).
The loss function of label smoothing can also be achieved by some calculation:
LLaS = − 1
m m∑ i=1 K∑ j=1 ỹ (i) j log(pθ(y (i) j |x (i))) = L(clean) − 1 m m∑ i=1 K∑ j=1 1−Ky(i)j K − 1 δfj(x (i),θ).
Notice that if we assume our model obtains a good estimation of f(x, θ), then when yj = 0, fj(x, θ) < 0 and when y = 1, fj(x, θ) > 0. In a binary classification case, LLaS can be written as L(clean) + δ|f(x, θ)|, which further indicates that label smoothing predicts logits with smaller magnitude and therefore avoids overfitting.
Similarly, logit squeezing (Kannan et al., 2018) applies a L2 norm on the logits directly as a penalty of over-confidence:
LLoS = L clean +
λ
m m∑ i=1 ||f(x(i))||2,
where Lclean is the original loss of neural networks and f(x(i)) is the logit of image x(i) as above.
Weight decay is another well known regularizer which efficiently reduces overfitting of neural networks by adding L1 or L2 penalty of weight w,
Lwd = L (clean) + λ||w||22.
However, weight decay is shown to be not very helpful for adversarial examples compared to label smoothing and logit squeezing, which will be discussed in Sec 4.
Combination of different approaches In feature smoothing and mixup, we generate new data points as linear combination of xi and xj . For generating the exact y value of these virtual points, mixup uses a linear interpolation for estimation, while feature smoothing chooses the dominant label. Nevertheless, it is also possible that feature smoothing or mixup also adds mislabeled noises into the training data, especially when xi and xj are not symmetric to the boundary. In that situation, label smoothing and logit squeezing are better ways to avoid overfitting. So we also consider to combine these methods together to gain a better test and adversarial accuracy.
3 EXPERIMENTS
3.1 RESULTS ON MNIST
We experiment feature smoothing, label smoothing, mix up, logit squeezing and their possible combinations on MNIST, with the results summarized in Table 1. We find that combining feature smoothing and logit squeezing give the best performance in both clean test data and adversarial examples. For all experiments in this section, we train our models for 200 epochs and use Adam for our optimizer with a learning rate at 10−4. Random noises are added into the training data in several methods, with the same σ value of 0.5.
For MNIST, when α ranges between 0.2 and 0.4 we observe similar performance for feature smoothing, whereas for large α at 0.5, too much noise in data label brings underfitting for feature smoothing. We use a final α value of 0.3 for reporting results in Table 1. Chosen by cross validation, we use α ∈ Beta(8, 8) for mixup, and δ = 0.1 for label smoothing. In logit squeezing, we use the weight λ of 0.2 as experimented in Kannan et al. (2018). In feature smoothing and mixup, 10 new data points are generated on each batch with batch size m = 50.
We use the LeNet model as Madry et al. (2017) and also apply the same attack parameters as they provided. After scaling the range of images pixels into [0, 1] (divided by 255), we apply perturbation per step of 0.01, 40 total attack steps with 1 random start, and the total adversarial perturbation threshold set as 0.3. Similar with Madry et al. (2017), we also generate black box examples for MNIST by independently initializing and training a copy of the LeNet model, then generate PGD attack based on that model. Both cross entropy loss and correct-wrong loss are used.
Each single method improves a small amount of the adversarial accuracy, but combinations of them lead to a much better performance (Table 1). Logit squeezing combined with feature smoothing and Gaussian random noise achieves the best performance among all those “efficient” methods.
3.2 RESULTS ON CIFAR10
We follow Madry’s lab for the experiments in CIFAR10. For all experiments in this section, we train our models for 80000 global steps with batch size m = 128 in each step. We use Momentum at 0.9 for our optimizer with a learning rate at 0.1 for the begining, 0.01 after 40000 global steps and 0.004 after 60000 global steps. Weight decay with λ = 0.0002 is also applied to all experiments. We use α ∈ Beta(8, 8) for mixup, δ = 0.1 for label smoothing, λ = 0.1 for logit squeezing , α = 0.2 for feature smoothing, and σ = 0.5 for Gaussian random noise. In feature smoothing and mixup, 10 new data points are randomly generated on each batch with batch size m = 128.
We apply the ResNet model and the same attack parameters as they used. We use perturbation per step of 2.0, 20 total attack steps with 1 random start and the total adversarial perturbation threshold set as 8. The black box adversarial examples are also generated by independently initializing and training a same model. Logit squeezing combined with feature smoothing and Gaussian random noise still performs the best among all of the ‘efficient’ methods (Table 2).
4 THEORETICAL EXPLANATIONS
In this section, we show that the above “efficient” methods increase neural networks’ adversarial robustness by improving the estimation of the decision boundaries. The improvement relies on two components: (1) unbiased estimation of boundary; (2) smaller estimation of variance. Given the same training data, these methods estimate the boundary closer to the true boundary than the original neural networks. Our simulations and theoretical results mainly focus on logistic regression. The idea is then discussed with deep convolutional neural networks.
To gain some intuitions on how the above methods improve the estimation, we start from logistic regression model with binary classes. Assume a feature vector x follows some distribution Px in Rd, w ∈ Rd and b ∈ R, then the corresponding label y follows a Bernoulli distribution with probabilities given by:
p := P (y = 1) = 1
1 + e−(wx+b) , P (y = 0) =
1
1 + ewx+b . (1)
Based on the changing the loss function, we divide these methods into two categories: (1) regularization methods: label smoothing (LaS), logit squeezing (LoS), and weight decay (wd); (2) augmentation methods: mixup and feature smoothing. Regularization methods add penalty term to loss function directly, while augmentation methods modify the loss function by adding new virtual data into it. We analyze the properties of these methods based on the two categories in the following subsections and the proofs of the theorems are included in Appendix.
4.1 REGULARIZATION METHODS
Our main theorem shows that all of the regularization methods estimate the decision boundary with smaller variance and the estimation is unbiased when x is symmetric with the boundary. With one-dimensional x and binary classes, the variance of decision boundary can be defined as: var( bw ), w 6= 0. Let p̂ denote the estimated probability. The confidence interval of p̂, which indicates the confidence interval of boundary, is narrowed down with the regularization methods, especially when the support of the distribution of x is far away from boundary (Figure 2(a)). As the value of w increases, the corresponding variances for w, b and decision boundary are also better controlled with regularization methods than the vanilla logistic regression (Figure. 3). For the vanilla logistic regression, when w is large enough, the variance of boundary grows in an exponential rate with w. But with these regularization methods, variance keeps decreasing even when w is really large. This observation is also true with higher dimensions and multiple classes (Figure 5). Inspired by our observation in the simulation study, we prove the following theorems in one dimension (1-D) to further explain the phenomena in the simulations.
Theorem 4.1. Label smoothing, logit squeezing, and weight decay all estimate the decision boundary with smaller variance in logistic regression model in 1-D.
Theorem 4.2. When x is symmetric with respect to boundary, label smoothing, logit squeezing, and weight decay have unbiased estimation of boundary in logistic regression model in 1-D.
The symmetric assumption is not unrealistic for imaging classification problems, since we can always assume the true boundary is in the middle of two classes. However, the stability of those methods when this assumption cannot hold is also important. We also show that label smoothing and logit squeezing is relatively more stable than weight decay when x is asymmetric (Figure 7).
4.2 AUGMENTATION METHODS
Rather than adding regularization to the loss function directly, adversarial training, mixup, and feature smoothing all ‘improve’ the loss function by changing the distribution of x. Figure 4 shows how the distribution of x influences the estimation of the boundary in different cases. It is natural to see that when the data are pushed closer to the true boundary, the boundary estimation becomes better due to reduced variance. Following the same analysis as above, x being around the boundary leads to a smaller estimated p(1 − p), which yields a smaller variance for w, b, and the boundary. When the symmetry assumption is violated, a more careful selection of the original data points is needed to avoid adding too much noise into the training set.
Following the above explanation, our Theorem 4.3 shows that adding data around the boundary, with labels generated from the true distribution, into training can narrow down the variance of the boundary even though the sample size remains the same. Adversarial training, mixup, and feature smoothing estimate the labels in different ways. We further show that feature smoothing achieves smaller variance than mixup when α is properly chosen (Sec. A.4).
Theorem 4.3. Adding data around the boundary narrows down the variance of the boundary estimation by moving the distribution of x closer to the boundary. The estimation is unbiased if all labels for the new data are balanced/correctly assigned.
4.3 EXTENSION TO NEURAL NETWORKS
In more complex models like convolutional neural networks (CNNs), the model can be divided into two parts: the hidden layers, which transform the input data x → f(x), and the classification model, which applies the softmax function (or the sigmoid function for binary classification) to f(x). Our results extend to CNNs for the regularization methods since the softmax function is just multi-class logistic regression. For the augmentation methods, we also believe that an interpolation of the input data implies an interpolation of the transformed data after the hidden layers. For simplicity, we assume the nonlinear layers in the CNN consist only of ReLU and max-pooling. Both ReLU and max-pooling satisfy the following properties: let x̃ = αx^{(i)} + (1 − α)x^{(j)}; then
0 \le \mathrm{ReLU}(\tilde{x}) \le \alpha\, \mathrm{ReLU}(x^{(i)}) + (1 - \alpha)\, \mathrm{ReLU}(x^{(j)}),
\tfrac{1}{2}\big(\alpha\, \text{max-p}(x^{(i)}) + (1 - \alpha)\, \text{max-p}(x^{(j)})\big) \le \text{max-p}(\tilde{x}) \le \alpha\, \text{max-p}(x^{(i)}) + (1 - \alpha)\, \text{max-p}(x^{(j)}), \qquad (2)
where max-p represents max-pooling. The first inequality on the max-pooling line in (2) holds when each entry of x^{(i)} and x^{(j)} is non-negative; since the pooling layer follows the ReLU layer, this assumption is valid. It further implies that f(x̃) ≤ αf(x^{(i)}) + (1 − α)f(x^{(j)}), which means augmentation methods on the data can be considered as augmentation on the logits. We may then apply our logistic regression framework.
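A quick numerical check of the inequalities in (2) on random non-negative inputs (a sanity check we add here, not an experiment from the paper); max-pooling is taken over the whole vector for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda v: np.maximum(v, 0.0)
alpha = 0.3
xi, xj = rng.uniform(0, 1, 16), rng.uniform(0, 1, 16)   # non-negative, as after ReLU
xt = alpha * xi + (1 - alpha) * xj

# ReLU line of (2): 0 <= ReLU(xt) <= alpha*ReLU(xi) + (1-alpha)*ReLU(xj)
assert np.all(relu(xt) >= 0)
assert np.all(relu(xt) <= alpha * relu(xi) + (1 - alpha) * relu(xj) + 1e-12)

# max-pooling line of (2), with max-p(v) = max(v) over the vector
upper = alpha * xi.max() + (1 - alpha) * xj.max()
assert 0.5 * upper - 1e-12 <= xt.max() <= upper + 1e-12
print("both inequalities in (2) hold on this sample")
```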
5 DISCUSSION
We have proposed feature smoothing, a straightforward data augmentation method, as an efficient way to increase the adversarial robustness of neural networks. In our experiments, feature smoothing combined with logit squeezing shows the best performance on both MNIST and CIFAR10. We found that α ∈ [0.2, 0.4] gives similar results when we apply the PGD attack with a total perturbation threshold of ε = 0.3. For smaller perturbations, a smaller α, e.g., α = 0.1 for ε = 0.1, also gives good results. As future work, more combinations of the different techniques can still be explored.
We also built a framework to explain how different regularization methods and augmentation methods improve the estimation of decision boundaries for logistic regression. Our main theorems show that all of these methods achieve a smaller estimation variance of the decision boundary while keeping the estimation unbiased. In some extreme cases, for example correctly labeled data around the boundary for only one specific class (Figure 7), vanilla logistic regression is certain to estimate the boundary incorrectly, but all of the above methods resolve the problem. We also extend the analysis to neural networks based on two facts: (1) softmax regression is a generalized form of logistic regression for multi-class classification problems; (2) activation functions like ReLU and max-pooling both preserve the linear inequalities in Eq. (2).
ACKNOWLEDGEMENT
We would like to thank Dr. Jean-Marc Langlois and Dr. Alyssa Glass for their valuable inputs. We thank Dr. Harini Kannan for providing a detailed explanation of her work, so that we could successfully replicate the experimental results from her paper. We also thank Weiqiang Shi for providing engineering support and Dr. Hua Guo for helpful feedback on drafts of this article.
A PROOFS
The proofs of Theorems 4.1–4.3 are derived in this section. We focus on binary logistic regression in the proofs.
A.1 PROOF OF THEOREM 4.1
Let ŵ and b̂ denote the estimates of w and b. The decrease in variance is mainly achieved through two effects: (1) estimates ŵ and b̂ with smaller magnitude; (2) a bias-variance trade-off. We first show that adding regularizers always produces w with smaller magnitude, which leads to smaller variance. Then we show that the bias of p̂ introduced by the penalties also leads to a smaller variance, especially when p̂ is closer to 0.5 than the true p.
Based on the Fisher information, when the estimated parameters are the MLE, the variances of ŵ and b̂ are given by:
\mathrm{var}(\hat{w}) = \big(\mathbb{E}_x\big[x^2\, \hat{p}(1 - \hat{p})\big]\big)^{-1}, \qquad (3)
\mathrm{var}(\hat{b}) = \big(\mathbb{E}_x\big[\hat{p}(1 - \hat{p})\big]\big)^{-1}. \qquad (4)
The decision boundary is {x : wx + b = 0}, so we use var(−b̂/ŵ) to measure the variance of the boundary estimate; by the delta method,
\mathrm{var}\Big(-\frac{\hat{b}}{\hat{w}}\Big) = \frac{\mathrm{var}(\hat{b})}{w^2} + b^2\, \mathrm{var}(1/\hat{w}) + o(1).
Without loss of generality, we further assume b = 0 and w > 0. The variance of b̂/ŵ is then equal to
\frac{1}{w^2\, \mathbb{E}_x[\hat{p}(1 - \hat{p})]}.
If the distribution of x is a delta mass, i.e., P_x = δ_x, the variance of b̂/ŵ can be further written as
g(w) = \frac{1}{w^2\, \hat{p}(1 - \hat{p})},
and the derivative with respect to w is
g'(w) = \frac{-2w - w^2 x (1 - 2\hat{p})}{w^4\, \hat{p}(1 - \hat{p})}.
Given our assumption that w > 0, it follows immediately that x(1 − 2p̂) < 0 and that wx(1 − 2p̂) is monotonically decreasing. Moreover, as w → ∞ we have wx(1 − 2p̂) → −∞. Therefore, there exists a constant C depending only on x such that for w > C we have −2w − w²x(1 − 2p̂) > 0. We have shown so far that if the estimate is the MLE, the variance of the boundary is increasing in w when w > C.
However, since we add one more regularization term to the original loss function, the estimator is no longer the MLE. An approximation of var(b̂) is
\frac{\mathbb{E}_y (y - \hat{p})^2}{\hat{p}^2 (1 - \hat{p})^2},
where y ∼ Bernoulli(p_x). With the regularization methods, and based on our assumption on w, we have 0.5 < p̂ < p_x for x > 0 and p_x < p̂ < 0.5 for x < 0. Therefore
\frac{\mathbb{E}_y (y - \hat{p})^2}{\hat{p}^2 (1 - \hat{p})^2} \le \frac{1}{\hat{p}(1 - \hat{p})},
which indicates a smaller variance of the boundary than the MLE estimate with the same ŵ and b̂.
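The Fisher-information expressions (3)-(4) can be checked numerically. The sketch below is our own illustration: it uses a symmetric design (so the cross term of the information matrix vanishes) and rescales the per-sample expression in (4) by the sample size n.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
w_true, n, reps = 2.0, 2000, 300
x = rng.uniform(-1, 1, size=(n, 1))            # symmetric design, true b = 0
p = 1 / (1 + np.exp(-w_true * x[:, 0]))

b_hats = []
for _ in range(reps):
    y = rng.binomial(1, p)
    clf = LogisticRegression(C=1e8, max_iter=1000).fit(x, y)  # ~unpenalized MLE
    b_hats.append(clf.intercept_[0])

print("empirical var(b_hat)   :", np.var(b_hats))
print("Fisher prediction (4)/n:", 1 / (n * np.mean(p * (1 - p))))
```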
A.2 PROOF OF THEOREM 4.2
When {x} is symmetric about the boundary x = 0, the data set can be split into two groups, {x_p} containing the positive values and {x_n} containing the negative values, which are symmetric to each other. We further assume the corresponding labels are also approximately symmetric, which is easy to achieve when the sample size is large enough. The loss function then automatically decomposes into L_p = L({x_p}) and L_n = L({x_n}). The minimizers ŵ and b̂ of L_p and of L_n have to be the same since the input data are symmetric. Then ŵ and b̂ are also the minimizers of the whole loss function L = L_p + L_n. This means that for any x in the positive part and its corresponding mirror image x′ in the negative part, the estimated p_θ̂(y|x) + p_θ̂(y′|x′) equals one, which indicates an unbiased estimation of the boundary.
A.3 PROOF OF THEOREM 4.3
As mentioned in Section 2, new data points
\tilde{x} = \alpha x^{(i)} + (1 - \alpha) x^{(j)}
are added as input in both methods, while the corresponding ỹ are estimated either by linear interpolation or by the dominant point among x^{(i)} and x^{(j)}. Let p̃ denote the true probability at x̃; we claim that adding x̃ with ỹ ∼ Bernoulli(p̃) to the input decreases the variance of the boundary. As above, the variance of the boundary can be estimated as
\mathrm{var}\Big(-\frac{\hat{b}}{\hat{w}}\Big) = \frac{1}{w^2\, \hat{p}(1 - \hat{p})},
where x is generated from a distribution P_x. When the ỹ are generated from the true probability, the MLE estimate does not change. On the other hand, if x^{(i)} and x^{(j)} are from different classes and symmetric about the decision boundary, x̃ is closer to the boundary than x^{(i)} and x^{(j)}, and therefore p̃(1 − p̃) > p_{x^{(i)}}(1 − p_{x^{(i)}}).
A.4 REMARK ON THEOREM 4.3
As mentioned above, given x from a delta mass distribution, the first derivative of the variance with respect to w is given by:
g'(w) = \frac{-2w - w^2 x (1 - 2\hat{p})}{w^4\, \hat{p}(1 - \hat{p})}.
Without loss of generality, we still assume x > 0, b = 0, and w > 0; then g'(w) = 0 gives w = \frac{2}{x(2\hat{p} - 1)}, with g'(w) < 0 if w < \frac{2}{x(2\hat{p} - 1)} and g'(w) > 0 otherwise. An x̃ close to the boundary (0 when b = 0) also leads to a probability p̃ close to 0.5, i.e., (1 − 2p̃) close to 0. Therefore, over the large range w ∈ (0, \frac{2}{x(2\hat{p} - 1)}), the variance is decreasing in w. As a result, feature smoothing gives an even smaller variance than the original MLE with x̃.
B SIMULATION RESULTS
B.1 LOGISTIC REGRESSION WITH HIGH-DIMENSIONAL FEATURE AND MULTIPLE CLASSES
(Figure 5: estimated value and confidence interval of b/w in 3-D with multiple classes.)
B.2 ASYMMETRIC FEATURES
However, in the real world we can never have the perfect scenario in which {x} is strictly symmetrically distributed with respect to the boundary. We further argue that regularizers based on wx + b, including label smoothing and logit squeezing, are more tolerant to unbalanced data than weight decay, which regularizes w only. Figure 6 shows how these four methods perform when {x} is not symmetrically generated, in two scenarios: (a) the data size is unbalanced with respect to the decision boundary; (b) the data distribution is unbalanced with respect to the boundary. It is easy to see that label smoothing and logit squeezing are less sensitive to the distribution of x in both scenarios. In contrast, vanilla logistic regression and weight decay are more sensitive: confidence intervals for vanilla logistic regression become wider and do not behave consistently as the value of x changes, and the estimated mean decision boundary (p̂ = 0.5) for weight decay deviates from the true one, so it is not as robust as the other methods.
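A sketch of scenario (a), unbalanced sample sizes around the true boundary, comparing the vanilla fit with weight decay. The sizes and penalty strength are illustrative choices of ours, and only point estimates are printed (the paper's Figure 6 additionally reports confidence intervals).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
# Scenario (a): 400 samples on one side of the true boundary x = 0, 40 on the other.
x = np.concatenate([rng.uniform(-1, 0, 400), rng.uniform(0, 1, 40)]).reshape(-1, 1)
p = 1 / (1 + np.exp(-4.0 * x[:, 0]))
y = rng.binomial(1, p)

for name, C in [("vanilla", 1e8), ("weight decay", 0.5)]:
    clf = LogisticRegression(C=C, max_iter=1000).fit(x, y)
    print(f"{name:12s} boundary estimate -b/w = "
          f"{-clf.intercept_[0] / clf.coef_[0, 0]:+.3f}")
```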
B.3 ANOTHER REALISTIC CASE
Now let us consider another data-unbalance scenario under a different data generation mechanism and see how the different methods perform. Note that all of our analysis above assumed that the data label y given the feature x was generated from the true model (1). In other words, y are random draws from a Bernoulli (Multinomial for multiple classes) distribution. Now consider another data generation mechanism that is also quite common in the real world: given input x, y is deterministic, given by the indicator function y = I(wx + b > 0) rather than following a distribution.
But some classes have data around the boundary and some do not, i.e., the distribution of x is unbalanced; for example, most x lie in [−1, −0.9] ∪ [0.9, 1] while a few lie in [0, 0.1]. Vanilla logistic regression fails to detect the true boundary in this case, but both the regularization methods and the augmentation methods improve the estimation (Figure 7). | 1. What is the focus and contribution of the paper regarding adversarial defense?
2. What are the strengths and weaknesses of the proposed feature smoothing method compared to other methods like mixup?
3. Do you have any concerns or questions regarding the proof of Theorem 4.1, particularly with assumptions made and omitted details?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the paper, such as providing empirical evaluations for the modified network adaptation? | Review | Review
The authors proposed a feature smoothing method without adding any computational burden for defending against adversarial examples. The idea is that both feature smoothing and Gaussian noise can help extend the range of the data. Moreover, the authors combined these methods together to gain better test and adversarial accuracy. They further proved 3 theorems to try to analyze the biases and variances of the decision boundary based on the Fisher information and the delta method.
In my opinion, the main contribution of this paper is to prove that the boundary variance will decrease due to adding one additional regularization term to the loss function.
Main comments:
1. The proposed feature smoothing method seems less novel to me. In contrast to the mixup method, the proposed method appears to remove the label smoothing part, so it is better to explain or justify why this could be better theoretically. Moreover, in the PGD and PGD-cw results, the performance is not as good as the Gaussian random noise method. Can the authors offer any discussion or comments on the possible reasons?
2. Some details of the proof of Theorem 4.1 seemed to be omitted. I am a bit confused about this.
a. “Without loss of generality, we further assume b = 0 and w > 0.” With smaller magnitude, b = 0 is reasonable, but why assume w > 0?
b. Could you present the derivation details or the backing theory for the approximation of var(b) when one more regularization term is added?
3. In addition, a method of modifying the network is proposed to adapt to the feature smoothing method. However, no experimental results are reported to support its effectiveness. I would believe some empirical evaluations may further strengthen the paper. |
| 1. What is the main contribution of the paper, and how does it relate to previous works?
2. What are the strengths and weaknesses of the proposed feature smoothing technique?
3. How does the paper evaluate the effectiveness of the proposed method, and what are the results?
4. What are some potential limitations or challenges of the proposed approach, and how might they be addressed?
5. How does the paper compare the proposed method to other regularization techniques, such as Mixup and SMOTE?
6. Are there any open questions or areas for future research related to the paper's topic? | Review | Review
The paper proposes a feature smoothing technique, which generates virtual data points by interpolating the input space of two randomly sampled examples. The aim is to generate virtual training data points that are close to adversarial examples. Experimental results on both the MNIST and Cifar10 datasets show that the proposed method, augmented with other regularization techniques, is robust to adversarial attacks and obtains higher accuracy when compared with several baselines. Also, the paper presents some theoretical analyses showing that label smoothing, logit squeezing, weight decay, Mixup and feature smoothing all produce a small estimated variance of the decision boundary when regularizing the networks.
The paper is generally well written, and the experiments show promising results. Nevertheless, the proposed method is not very novel, and the method is not comprehensively evaluated with experiments.
Major remarks:
1. The experiments show that feature smoothing has to be combined with other regularizers in order to outperform other testing methods. In this sense the contribution of feature smoothing alone is not clear. For example, without integrating other regularizers, Mixup and feature smoothing obtain very close results for BlackBox-PGD, BlackBoxcw and Clean, as shown in Table 1. In addition, in the paper, feature smoothing alone is only validated on MNIST (not even tested on Cifar10 in Table 2). Consequently, it is difficult to evaluate the contribution of the proposed smoothing technique.
2. Experiments are conducted on the MNIST and Cifar10 datasets, which have a small number of target classes. Empirically, it would be useful to see how it performs on more complex datasets such as Cifar100 or ImageNet.
3. The argument for why the proposed feature smoothing method works is presented in Theorem 4.3 in Section 4.2, but the theorem seems to rely on the assumption that one can add data around the true decision boundary. However, how we can generate samples near the true decision boundary and how we should choose the mixing ratio to attain this goal is not clear to me in the paper. In addition, how can we be sure that adding synthetic data from one class does not collide with the manifolds of other classes, as suggested in AdaMixup (Guo et al., MixUp as Locally Linear Out-Of-Manifold Regularization)? This is particularly relevant if the proposed feature smoothing strategy prefers to create virtual samples close to the true decision boundary.
4. At the end of page 4, the authors claim that both feature smoothing and Mixup generate new data points that are closer to the true boundary. I wonder if the authors could further justify or show that either theoretically or experimentally.
5. The proposed method is similar to SMOTE (Chawla et al., SMOTE: Synthetic Minority Over-sampling Technique). In this sense, comparison with SMOTE would be very beneficial.
Minor remarks:
1. In the Mixup paper, the value 1 was carefully chosen as the mixing policy Alpha for Cifar10 (otherwise underfitting can easily occur, as shown in AdaMixUp), yet it seems the authors here used a very large value of 8 for Mixup’s Beta distribution, and I did not see a justification for that number in the paper.
2. Typo in the second paragraph of page2: SHNV should be SVHN |
ICLR | Title
Meta-Q-Learning
Abstract
This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-Reinforcement Learning (meta-RL). MQL builds upon three simple ideas. First, we show that Q-learning is competitive with state-of-the-art meta-RL algorithms if given access to a context variable that is a representation of the past trajectory. Second, a multi-task objective to maximize the average reward across the training tasks is an effective method to meta-train RL policies. Third, past data from the meta-training replay buffer can be recycled to adapt the policy on a new task using off-policy updates. MQL draws upon ideas in propensity estimation to do so and thereby amplifies the amount of available data for adaptation. Experiments on standard continuous-control benchmarks suggest that MQL compares favorably with the state of the art in meta-RL.
Rasool Fakoor1, Pratik Chaudhari2∗, Stefano Soatto1, Alexander Smola1 1 Amazon Web Services 2 University of Pennsylvania Email: {fakoor, soattos, smola}@amazon.com, pratikac@seas.upenn.edu
1 INTRODUCTION
Reinforcement Learning (RL) algorithms have demonstrated good performance on simulated data. There are however two main challenges in translating this performance to real robots: (i) robots are complex and fragile, which precludes extensive data collection, and (ii) a real robot may face an environment that is different from the simulated environment it was trained in. This has fueled research into Meta-Reinforcement Learning (meta-RL), which develops algorithms that “meta-train” on a large number of different environments, e.g., simulated ones, and aim to adapt to a new environment with little data.
How well does meta-RL work today? Fig. 1 shows the performance of two prototypical meta-RL algorithms on four standard continuous-control benchmarks.1 We compared them to the following simple baseline: an off-policy RL algorithm (TD3 by Fujimoto et al. (2018b)) which was trained to maximize the average reward over all training tasks and modified to use a “context variable” that represents the trajectory. All algorithms in this figure use the same evaluation protocol. It is surprising that this
simple non-meta-learning-based method is competitive with state-of-the-art meta-RL algorithms. This is the first contribution of our paper: we demonstrate that it is not necessary to meta-train policies to do well on existing benchmarks.
∗ Work done while at Amazon Web Services. 1 We obtained the numbers for MAML and PEARL from training logs published by Rakelly et al. (2019).
Our second contribution is an off-policy meta-RL algorithm named Meta-Q-Learning (MQL) that builds upon the above result. MQL uses a simple meta-training procedure: it maximizes the average rewards across all meta-training tasks using off-policy updates to obtain
\hat{\theta}_{\text{meta}} = \arg\max_{\theta}\; \frac{1}{n} \sum_{k=1}^{n} \mathbb{E}_{\tau \sim \mathcal{D}^k}\big[\ell^k(\theta)\big] \qquad (1)
where ℓ^k(θ) is the objective evaluated on the transition τ obtained from the task D^k(θ); e.g., the 1-step temporal-difference (TD) error would set ℓ^k(θ) = TD²(θ; τ). This objective, which we call the multi-task objective, is the simplest form of meta-training.
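A sketch of the multi-task meta-training objective (1) with the TD choice of ℓ^k; the function and network names are placeholders of ours, not the paper's code.

```python
import torch

def multitask_td_loss(q_net, q_target, policy, task_batches, gamma=0.99):
    """Average 1-step TD error over one transition batch per training task,
    i.e., a TD instantiation of the multi-task objective (1) (a sketch)."""
    losses = []
    for (x, u, r, x_next) in task_batches:     # one batch per task k
        with torch.no_grad():
            target = r + gamma * q_target(x_next, policy(x_next))
        td = q_net(x, u) - target
        losses.append((td ** 2).mean())
    return torch.stack(losses).mean()          # minimize this to meta-train
```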
For adapting the policy to a new task, MQL samples transitions from the meta-training replay buffer that are similar to those from the new task. This amplifies the amount of data available for adaptation but it is difficult to do because of the large potential bias. We use techniques from the propensity estimation literature for performing this adaptation and the off-policy updates of MQL are crucial to doing so. The adaptation phase of MQL solves
\arg\max_{\theta}\; \Big\{ \mathbb{E}_{\tau \sim \mathcal{D}_{\text{new}}}\big[\ell_{\text{new}}(\theta)\big] + \mathbb{E}_{\tau \sim \mathcal{D}_{\text{meta}}}\big[\beta(\tau; \mathcal{D}_{\text{new}}, \mathcal{D}_{\text{meta}})\, \ell_{\text{new}}(\theta)\big] - \big(1 - \widehat{\text{ESS}}\big) \|\theta - \hat{\theta}_{\text{meta}}\|_2^2 \Big\} \qquad (2)
where D_meta is the meta-training replay buffer, the propensity score β(τ; D_new, D_meta) is the odds of a transition τ belonging to D_new versus D_meta, and ÊSS is the Effective Sample Size between D_new and D_meta, which is a measure of the similarity of the new task to the meta-training tasks. The first term computes off-policy updates on the new task, the second term performs β(·)-weighted off-policy updates on old data, while the third term is an automatically adapting proximal term that prevents degradation of the policy during adaptation.
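A sketch of the adaptation objective (2); `loss_new`, `loss_meta`, `beta`, `theta_meta`, and `ess` are hypothetical helpers of ours standing for the new-task objective, the per-transition objective on a meta-replay batch, the propensity scores of (12), the meta-trained parameters, and the normalized ESS, respectively.

```python
import torch

def mql_adaptation_loss(policy, loss_new, loss_meta, beta, theta_meta, ess):
    """Negative of the objective in (2), so that a standard optimizer can
    minimize it (a sketch with placeholder callables)."""
    obj = loss_new(policy).mean()                      # new-task term
    obj = obj + (beta * loss_meta(policy)).mean()      # beta-weighted old data
    prox = sum((p - p0).pow(2).sum()                   # proximal term
               for p, p0 in zip(policy.parameters(), theta_meta))
    return -obj + (1.0 - ess) * prox
```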
We perform extensive experiments in Sec. 4.2 including ablation studies using standard meta-RL benchmarks that demonstrate that MQL policies obtain higher average returns on new tasks even if they are meta-trained for fewer time-steps than state-of-the-art algorithms.
2 BACKGROUND
This section introduces notation and formalizes the meta-RL problem. We discuss techniques for estimating the importance ratio between two probability distributions in Sec. 2.2.
Consider a Markov Decision Process (MDP) denoted by
x_{t+1} = f^k(x_t, u_t, \xi_t), \qquad x_0 \sim p_0^k, \qquad (3)
where x_t ∈ X ⊂ ℝ^d are the states and u_t ∈ U ⊂ ℝ^p are the actions. The dynamics f^k is parameterized by k ∈ {1, . . . , n} where each k corresponds to a different task. The domain of all these tasks, X for the states and U for the actions, is the same. The distribution p_0^k denotes the initial state distribution and ξ_t is the noise in the dynamics. Given a deterministic policy u_θ(x_t), the action-value function for γ-discounted future rewards r_t^k := r^k(x_t, u_θ(x_t)) over an infinite time-horizon is
q^k(x, u) = \mathbb{E}_{\xi(\cdot)}\Big[\sum_{t=0}^{\infty} \gamma^t\, r^k_t \;\Big|\; x_0 = x,\ u_0 = u,\ u_t = u_\theta(x_t)\Big]. \qquad (4)
Note that we have assumed that different tasks have the same state and action space and may only differ in their dynamics f^k and reward function r^k. Given one task k ∈ {1, . . . , n}, the standard Reinforcement Learning (RL) formalism solves for
\hat{\theta}^k = \arg\max_{\theta}\; \ell^k(\theta) \quad \text{where} \quad \ell^k(\theta) = \mathbb{E}_{x \sim p_0}\big[q^k(x, u_\theta(x))\big]. \qquad (5)
Let us denote the dataset of all states, actions, and rewards pertaining to a task k and policy u_θ(x) by
\mathcal{D}^k(\theta) = \big\{\, x_t,\ u_\theta(x_t),\ r^k,\ x_{t+1} = f^k(x_t, u_\theta(x_t), \xi_t) \,\big\}_{t \ge 0,\; x_0 \sim p_0^k,\; \xi(\cdot)};
we will often refer to D^k as the “task” itself. The Deterministic Policy Gradient (DPG) algorithm (Silver et al., 2014) for solving (5) learns a ϕ-parameterized approximation q_ϕ to the optimal value function q^k by minimizing the Bellman error and the optimal policy u_θ that maximizes this approximation by solving the coupled optimization problem
ϕ̂k = arg min ϕ E τ∼Dk
[ ( qϕ(x, u)− rk − γ qϕ(x′, uθ̂k(x ′)) )2 ] ,
θ̂k = arg max θ E τ∼Dk
[ q ϕ̂k (x, uθ(x)) ] .
(6)
The 1-step temporal difference error (TD error) is defined as
$$\text{TD}^{2}(\theta) = \big(q_\varphi(x,u) - r^{k} - \gamma\, q_\varphi(x', u_\theta(x'))\big)^{2} \tag{7}$$
where we keep the dependence of TD(·) on ϕ implicit. DPG, or its deep network-based variant DDPG (Lillicrap et al., 2015), is an off-policy algorithm: the expectations in (6) are computed using data that need not be generated by the policy being optimized (uθ); this data can come from some other policy.
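As a sketch of how the critic update in (6)–(7) looks in code, the snippet below computes the 1-step TD loss on a batch of off-policy transitions; the networks q_phi, q_target and the policy u_theta are hypothetical PyTorch modules.

```python
import torch

def td_loss(q_phi, q_target, u_theta, batch, gamma=0.99):
    """Mean squared 1-step TD error of (7) on a batch of transitions (x, u, r, x')."""
    x, u, r, x_next = batch
    with torch.no_grad():                        # the bootstrap target is not differentiated through
        target = r + gamma * q_target(x_next, u_theta(x_next))
    return ((q_phi(x, u) - target) ** 2).mean()
```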
In the sequel, we will focus on the parameters θ parameterizing the policy. The parameters ϕ of the value function are always updated to minimize the TD-error and are omitted for clarity.
2.1 META-REINFORCEMENT LEARNING (META-RL)
Meta-RL is a technique to learn an inductive bias that accelerates the learning of a new task by training on a large number of training tasks. Formally, meta-training on tasks from the meta-training set Dmeta = {Dk}k=1,...,n involves learning a policy
$$\hat{\theta}_{\text{meta}} = \arg\max_{\theta}\; \frac{1}{n}\sum_{k=1}^{n} \ell^{k}_{\text{meta}}(\theta) \tag{8}$$
where ℓk_meta(θ) is a meta-training loss that depends on the particular method. Gradient-based meta-RL (take MAML by Finn et al. (2017) as a concrete example) sets
$$\ell^{k}_{\text{meta}}(\theta) = \ell^{k}\big(\theta + \alpha \nabla_{\theta} \ell^{k}(\theta)\big) \tag{9}$$
for a step-size α > 0; ℓk(θ) is the objective of non-meta-RL (5). In this case ℓk_meta is the objective obtained on the task Dk after one (or in general, more) updates of the policy on the task. The idea behind this is that even if the policy θ̂meta does not perform well on all tasks in Dmeta, it may be updated quickly on a new task Dnew to obtain a well-performing policy. This can either be done using the same procedure as that of meta-training time, i.e., by maximizing ℓnew_meta(θ) with the policy θ̂meta as the initialization, or by some other adaptation procedure. The meta-training method and the adaptation method in meta-RL, and meta-learning in general, can be different from each other.
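For concreteness, here is a minimal sketch of the one-step meta-objective (9) with PyTorch autograd; objective_fn (computing ℓk on a task batch) and the flat list of parameter tensors are simplified, hypothetical stand-ins.

```python
import torch

def maml_meta_objective(objective_fn, theta, task_batch, alpha=0.1):
    """Evaluate l^k(theta + alpha * grad l^k(theta)) as in (9) for one task.
    `theta` is a list of tensors with requires_grad=True."""
    inner = objective_fn(theta, task_batch)
    grads = torch.autograd.grad(inner, theta, create_graph=True)   # keep the graph for the outer gradient
    theta_prime = [p + alpha * g for p, g in zip(theta, grads)]    # gradient ascent: l^k is maximized
    return objective_fn(theta_prime, task_batch)
```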
2.2 LOGISTIC REGRESSION FOR ESTIMATING THE PROPENSITY SCORE
Consider standard supervised learning: given two distributions q(x) (say, train) and p(x) (say, test), we would like to estimate how a model’s predictions ŷ(x) change across them. This is formally done using importance sampling:
$$\mathbb{E}_{x\sim p(x)}\, \mathbb{E}_{y|x}\big[\ell(y, \hat{y}(x))\big] \;=\; \mathbb{E}_{x\sim q(x)}\, \mathbb{E}_{y|x}\big[\beta(x)\, \ell(y, \hat{y}(x))\big]; \tag{10}$$
where y|x are the true labels of data, the predictions of the model are ŷ(x), and ℓ(y, ŷ(x)) is the loss for each datum (x, y). The importance ratio β(x) = dp/dq (x), also known as the propensity score, is the Radon-Nikodym derivative (Resnick, 2013) of the two data densities and measures the odds of a sample x coming from the distribution p versus the distribution q. In practice, we do not know the densities q(x) and p(x) and therefore need to estimate β(x) using some finite data Xq = {x1, . . . , xm} drawn from q and Xp = {x′1, . . . , x′m} drawn from p. As Agarwal et al. (2011) show, this is easy to do using logistic regression. Set zk = 1 to be the labels for the data in Xq and zk = −1 to be the labels of the data in Xp for k ≤ m, and fit a logistic classifier on the combined 2m samples by solving
$$w^{*} = \arg\min_{w}\; \frac{1}{2m} \sum_{(x,z)} \log\big(1 + e^{-z\, w^{\top} x}\big) + c\,\|w\|^{2}. \tag{11}$$
This gives
$$\beta(x) = \frac{\mathbb{P}(z=-1\,|\,x)}{\mathbb{P}(z=1\,|\,x)} = e^{-w^{*\top} x}. \tag{12}$$
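A minimal sketch of (11)–(12) with scikit-learn follows; Xq and Xp are assumed to be given feature arrays, and the mapping between c in (11) and scikit-learn's inverse regularization strength C is only approximate since the two formulations use different scaling conventions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_propensity(Xq, Xp, c=1.0):
    """Fit the logistic classifier of (11); return beta(x) = P(z=-1|x) / P(z=1|x) as in (12)."""
    X = np.vstack([Xq, Xp])
    z = np.concatenate([np.ones(len(Xq)), -np.ones(len(Xp))])   # z = 1 for q-samples, z = -1 for p-samples
    clf = LogisticRegression(C=1.0 / c).fit(X, z)

    def beta(x):
        proba = clf.predict_proba(np.atleast_2d(x))              # columns ordered as clf.classes_ = [-1, 1]
        return proba[:, 0] / proba[:, 1]
    return beta
```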
Normalized Effective Sample Size (ÊSS): A quantity related to β(x) is the normalized Effective Sample Size (ÊSS), which we define as the relative number of samples from the target distribution p(x) required to obtain an estimator with performance (say, variance) equal to that of the importance sampling estimator (10). It is not possible to compute the ÊSS without knowing both densities q(x) and p(x) but there are many heuristics for estimating it. A popular one in the Monte Carlo literature (Kong, 1992; Smith, 2013; Elvira et al., 2018) is
$$\widehat{\text{ESS}} = \frac{1}{m}\, \frac{\big(\sum_{k=1}^{m} \beta(x_k)\big)^{2}}{\sum_{k=1}^{m} \beta(x_k)^{2}} \;\in\; [0, 1] \tag{13}$$
where X = {x1, . . . , xm} is some finite batch of data. Observe that if two distributions q and p are close then the ÊSS is close to one; if they are far apart the ÊSS is close to zero.
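Given the estimated ratios β(xk) on a batch, the heuristic (13) is a one-liner:

```python
import numpy as np

def normalized_ess(betas):
    """Normalized effective sample size of (13); returns a value in [0, 1]."""
    betas = np.asarray(betas, dtype=float)
    return betas.sum() ** 2 / (len(betas) * (betas ** 2).sum())
```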
3 MQL
This section describes the MQL algorithm. We begin by describing the meta-training procedure of MQL including a discussion of multi-task training in Sec. 3.1. The adaptation procedure is described in Sec. 3.2.
3.1 META-TRAINING
MQL performs meta-training using the multi-task objective. Note that if one sets
$$\ell^{k}_{\text{meta}}(\theta) \triangleq \ell^{k}(\theta) = \mathbb{E}_{x\sim p_0^{k}}\big[q^{k}(x, u_\theta(x))\big] \tag{14}$$
in (8), then the parameters θ̂meta are such that they maximize the average returns over all tasks from the meta-training set. We use an off-policy algorithm named TD3 (Fujimoto et al., 2018b) as the building block and solve for
$$\hat{\theta}_{\text{meta}} = \arg\min_{\theta}\; \frac{1}{n}\sum_{k=1}^{n} \mathbb{E}_{\tau\sim\mathcal{D}^{k}}\big[\text{TD}^{2}(\theta)\big]; \tag{15}$$
where TD(·) is defined in (7). As is standard in TD3, we use two action-value functions parameterized by ϕ1 and ϕ2 and take their minimum to compute the target in (7). This trick, known as “double-Q-learning”, reduces the over-estimation bias. Let us emphasize that (14) is a special case of the procedure outlined in (8). The following remark explains why MQL uses the multi-task objective as opposed to the meta-training objective used, for instance, in existing gradient-based meta-RL algorithms.
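Schematically, one critic update on the multi-task objective (15) with the double-Q target looks as follows; the per-task replay buffers, the four critics, the policy and the optimizer are hypothetical placeholders. Sampling a task uniformly and then a mini-batch of its transitions gives an unbiased estimate of the average in (15).

```python
import random
import torch

def meta_train_critic_step(task_buffers, q1, q2, q1_tgt, q2_tgt, u_theta, critic_opt, gamma=0.99):
    """One stochastic update of (15) with the clipped double-Q target of TD3."""
    x, u, r, x_next = random.choice(task_buffers).sample()   # task k ~ Uniform, then tau ~ D^k
    with torch.no_grad():
        u_next = u_theta(x_next)
        target = r + gamma * torch.min(q1_tgt(x_next, u_next), q2_tgt(x_next, u_next))
    loss = ((q1(x, u) - target) ** 2).mean() + ((q2(x, u) - target) ** 2).mean()
    critic_opt.zero_grad()
    loss.backward()
    critic_opt.step()
```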
Remark 1. Let us compare the critical points of the m-step MAML objective (9) to those of the multi-task objective which uses (14). As is done by the authors in Nichol et al. (2018), we can perform a Taylor series expansion around the parameters θ to obtain
$$\nabla \ell^{k}_{\text{meta}}(\theta) = \nabla \ell^{k}(\theta) + 2\alpha(m-1)\,\big(\nabla^{2} \ell^{k}(\theta)\big)\, \nabla \ell^{k}(\theta) + O(\alpha^{2}). \tag{16}$$
Further, note that ∇ℓk_meta in (16) is also the gradient of the loss
$$\ell^{k}(\theta) + \alpha(m-1)\,\|\nabla \ell^{k}(\theta)\|_{2}^{2} \tag{17}$$
up to first order. This lends a new interpretation: MAML is attracted towards regions in the loss landscape that under-fit on individual tasks, since parameters with large ‖∇ℓk‖2 will be far from the local maxima of ℓk(θ). The parameters α and m control this under-fitting: the larger the number of gradient steps, the larger the under-fitting effect. This remark suggests that the adaptation speed of gradient-based meta-learning comes at the cost of under-fitting on the tasks.
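A one-line check of the claim: differentiating (17) directly gives
$$\nabla\Big[\ell^{k}(\theta) + \alpha(m-1)\,\|\nabla \ell^{k}(\theta)\|_{2}^{2}\Big] \;=\; \nabla \ell^{k}(\theta) \;+\; 2\alpha(m-1)\,\big(\nabla^{2} \ell^{k}(\theta)\big)\,\nabla \ell^{k}(\theta),$$
which agrees with (16) up to the O(α²) remainder.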
3.1.1 DESIGNING CONTEXT
As discussed in Sec. 1 and 4.4, the identity of the task in meta-RL can be thought of as the hidden variable of an underlying partially-observable MDP. The optimal policy therefore depends on the entire trajectory of states, actions and rewards. We thus design a recurrent context variable zt that depends on {(xi, ui, ri)}i≤t. We set zt to the hidden state at time t of a Gated Recurrent Unit (GRU by Cho et al. (2014)) model, as sketched below. All the policies uθ(x) and value functions qϕ(x, u) in MQL are conditioned on the context and implemented as uθ(x, z) and qϕ(x, u, z). Any other recurrent model can be used to design the context; we used a GRU because it offers a good trade-off between a rich representation and computational complexity.
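A minimal sketch of this context module in PyTorch; the dimensions and batch layout are hypothetical choices.

```python
import torch
import torch.nn as nn

class Context(nn.Module):
    """GRU that maps the running history {(x_i, u_i, r_i)}_{i<=t} to a context vector z_t."""
    def __init__(self, state_dim, action_dim, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(state_dim + action_dim + 1, hidden_dim, batch_first=True)

    def forward(self, states, actions, rewards):
        # states: (B, T, state_dim), actions: (B, T, action_dim), rewards: (B, T, 1)
        inp = torch.cat([states, actions, rewards], dim=-1)
        _, h = self.gru(inp)          # final hidden state, shape (1, B, hidden_dim)
        return h.squeeze(0)           # z_t, consumed by u_theta(x, z) and q_phi(x, u, z)
```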
Remark 2 (MQL uses a deterministic context that is not permutation invariant). We have aimed for simplicity while designing the context. The context in MQL is built using an off-the-shelf model like a GRU and is not permutation invariant. Indeed, the direction of time affords crucial information about the dynamics of a task to the agent, e.g., a Half-Cheetah running forward versus backward has arguably the same state trajectory but in a different order. Further, the context in MQL is a deterministic function of the trajectory. Both these aspects are different from the context used by Rakelly et al. (2019), who design an inference network and sample a probabilistic context conditioned on a moving window. RL algorithms are quite complex and challenging to reproduce. Current meta-RL techniques which build upon them further exacerbate this complexity. Our demonstration that a simple context variable is enough is an important contribution.
3.2 ADAPTATION TO A NEW TASK
We next discuss the adaptation procedure which adapts the meta-trained policy θ̂meta to a new task Dnew with few data. MQL optimizes the adaptation objective introduced in (2) in two steps.
1. Vanilla off-policy adaptation: The first step is to update the policy using the new data as
$$\arg\max_{\theta}\; \Big\{ \mathbb{E}_{\tau\sim\mathcal{D}_{\text{new}}}\big[\ell^{\text{new}}(\theta)\big] - \frac{\lambda}{2}\,\|\theta - \hat{\theta}_{\text{meta}}\|_{2}^{2} \Big\}. \tag{18}$$
The quadratic penalty ‖θ − θ̂meta‖2 keeps the parameters close to θ̂meta. This is crucial to reducing the variance of the model that is adapted using few data from the new task (Reddi et al., 2015). Off-policy learning is critical in this step because of its sample efficiency. We initialize θ to θ̂meta while solving (18).
2. Importance-ratio corrected off-policy updates: The second step of MQL exploits the meta-training replay buffer. Meta-training tasks Dmeta are disjoint from Dnew but, because they are expected to come from the same task distribution, transitions collected during meta-training can potentially be exploited to adapt the policy. This is difficult to do on two counts. First, the meta-training transitions do not come from Dnew. Second, even for transitions from the same task, it is non-trivial to update the policy because of extrapolation error (Fujimoto et al., 2018a): the value function has high error on states it has not seen before. Our use of the propensity score to reweigh transitions is a simpler version of the conditional generative model used by Fujimoto et al. (2018a) in this context.
MQL fits a logistic classifier on a mini-batch of transitions from the meta-training replay buffer and the transitions collected from the new task in step 1. The context variable zt is the feature for this classifier. The logistic classifier estimates the importance ratio β(τ ;Dnew,Dmeta) and can be used to reweigh data from the meta-training replay buffer for taking updates as
$$\arg\max_{\theta}\; \Big\{ \mathbb{E}_{\tau\sim\mathcal{D}_{\text{meta}}}\big[\beta(\tau;\mathcal{D}_{\text{new}},\mathcal{D}_{\text{meta}})\,\ell^{\text{new}}(\theta)\big] - \frac{\lambda}{2}\,\|\theta - \hat{\theta}_{\text{meta}}\|_{2}^{2} \Big\}. \tag{19}$$
We have again included a quadratic penalty ‖θ − θ̂meta‖2 that keeps the new parameters close to θ̂meta. Estimating the importance ratio involves solving a convex optimization problem on few samples (typically, 200 from the new task and 200–400 from the meta-training tasks). This classifier allows MQL to exploit the large amount of past data. In practice, we perform as many as 100× more weight updates using (19) than (18).
Remark 3 (Picking the coefficient λ). Following Fakoor et al. (2019), we pick
$$\lambda = 1 - \widehat{\text{ESS}}$$
for both the steps (18–19). This relaxes the quadratic penalty if the new task is similar to the meta-training tasks (ÊSS is large) and vice-versa. While λ could be tuned as a hyper-parameter, our empirical results show that adapting it using ÊSS is a simple and effective heuristic.
Remark 4 (Details of estimating the importance ratio). It is crucial to ensure that the logistic classifier for estimating β generalizes well if we are to reweigh transitions in the meta-training replay buffer that are different from the ones the classifier was fitted upon. We do so in two ways: (i) the regularization coefficient in (11) is chosen to be relatively large, so that we prefer false negatives over false positives; (ii) transitions with very high β are valuable for updating (19) but cause a large variance in stochastic gradient descent-based updates, so we clip β before taking the update in (19). The clipping constant is a hyper-parameter and is given in Sec. 4.
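Putting (19) together with Remarks 3 and 4, one β-weighted adaptation update looks schematically as follows; the per-transition objective new_task_objective, the propensity model beta_fn over context features, and the clipping constant are hypothetical placeholders.

```python
import torch

def adaptation_step(theta, theta_meta, meta_batch, beta_fn, ess, new_task_objective, opt, beta_clip=1.0):
    """One stochastic update of (19) with lambda = 1 - ESS (Remark 3) and clipped beta (Remark 4)."""
    z, transitions = meta_batch                             # context features and transitions from D_meta
    beta = torch.clamp(beta_fn(z), max=beta_clip)           # clip large ratios to control gradient variance
    lam = 1.0 - ess                                         # automatic proximal coefficient
    loss = -(beta * new_task_objective(theta, transitions)).mean()   # maximize the weighted objective
    loss = loss + 0.5 * lam * sum(((p - p0) ** 2).sum() for p, p0 in zip(theta, theta_meta))
    opt.zero_grad()
    loss.backward()
    opt.step()
```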
MQL requires having access to the meta-training replay buffer during adaptation. This is not a debilitating requirement and there are a number of clustering techniques that can pick important transitions from the replay-buffer if a robotic agent is limited by available hard-disk space. The meta-training replay buffer is at most 3 GB for the experiments in Sec. 4.
4 EXPERIMENTS
This section presents the experimental results of MQL. We first discuss the setup and provide details of the benchmarks in Sec. 4.1. This is followed by empirical results and ablation experiments in Sec. 4.2.
4.1 SETUP
Tasks and algorithms: We use the MuJoCo (Todorov et al., 2012) simulator with OpenAI Gym (Brockman et al., 2016) on continuous-control meta-RL benchmark tasks. These tasks have different rewards or randomized system parameters (Walker-2D-Params) and have been used in previous papers such as Finn et al. (2017); Rothfuss et al. (2018); Rakelly et al. (2019). We compare against standard baseline algorithms, namely MAML (TRPO (Schulman et al., 2015) variant) (Finn et al., 2017), RL2 (Duan et al., 2016), ProMP (Rothfuss et al., 2018) and PEARL (Rakelly et al., 2019). We obtained the training curves and hyper-parameters for these algorithms from the published code by Rakelly et al. (2019).
We will compare the above algorithms against: (i) vanilla TD3 (Fujimoto et al., 2018a) without any adaptation on new tasks, (ii) TD3-context: TD3 with the GRU-based context of Sec. 3.1.1 without any adaptation, and (iii) MQL: TD3 with context and adaptation on the new task using the procedure in Sec. 3.2. All three variants use the multi-task objective for meta-training (15). We use Adam (Kingma & Ba, 2014) for optimizing all the loss functions in this paper.
Evaluation: Current meta-RL benchmarks lack a systematic evaluation procedure.2 For each environment, Rakelly et al. (2019) constructed a fixed set of meta-training tasks (Dmeta) and a validation set of tasks Dnew that are disjoint from the meta-training set. To enable direct comparison with published empirical results, we closely followed the evaluation code of Rakelly et al. (2019) to create these tasks. We also use the exact same evaluation protocol as that of these authors, e.g., 200 time-steps of data from the new task, or the number of evaluation episodes. We report the undiscounted return on the validation tasks with statistics computed across 5 random seeds.
4.2 RESULTS
Our first result, in Fig. 2, is to show that vanilla off-policy learning with context, without any adaptation, is competitive with state-of-the-art meta-RL algorithms. We used a standard implementation of TD3 and trained on the meta-training tasks using the multi-task objective (15). Hyper-parameters for these tasks are provided in Appendix D. This result is surprising and has gone unnoticed in the current literature. Policies that have access to the context can easily generalize to the validation tasks and achieve performance that is comparable to more sophisticated meta-RL algorithms.
We next evaluate MQL against existing meta-RL benchmarks on all environments. The results are shown in Fig. 3. We see that for all environments except Walker-2D-Params and Ant-Goal-2D, MQL obtains comparable or better returns on the validation tasks. In most cases, in particular for the challenging Humanoid-Direc-2D environment, MQL converges faster than existing algorithms. MAML and ProMP require about 100M time-steps to converge to returns that are significantly worse than the returns of off-policy algorithms like MQL and PEARL. Compare the training curve for TD3-context for the Ant-Goal-2D environment in Fig. 2 with that of the same environment in Fig. 3: the former shows a prominent dip in performance as meta-training progresses; this dip is absent in Fig. 3 and can be attributed to the adaptation phase of MQL.
2For instance, training and validation tasks are not explicitly disjoint in Finn et al. (2017); Rothfuss et al. (2018) and these algorithms may benefit during adaptation from having seen the same task before. The OpenAI Gym environments used in Finn et al. (2017); Rothfuss et al. (2018); Rakelly et al. (2019) provide different rewards for the same task. The evaluation protocol in existing papers, e.g., length of episode for a new task, amount of data available for adaptation from the new task, is not consistent. This makes reproducing experiments and comparing numerical results extremely difficult.
4.3 ABLATION EXPERIMENTS
We conduct a series of ablation studies to analyze the different components of the MQL algorithm. We use two environments for this purpose, namely Half-Cheetah-Fwd-Back and Ant-Fwd-Back. Fig. 4a shows that the adaptation in MQL in (18) and (19) improves performance. Also observe that MQL has a smaller standard deviation in the returns as compared to TD3-context, which does not perform any adaptation; this can be seen as the adaptation phase making up for the lost performance of the meta-trained policy on a difficult task. Next, we evaluate the importance of the additional data from the replay buffer in MQL. Fig. 4b compares the performance of MQL with and without the updates in (19). We see that the old data, even if it comes from different tasks, is useful to improve the performance on top of (18). Fig. 4c shows the effectiveness of setting λ = 1 − ÊSS as compared to a fixed value of λ = 0.5. We see that modulating the quadratic penalty with ÊSS helps, although the effect is minor for these tasks. The ideal value of λ depends on a given task and using 1 − ÊSS can help to adjust to different tasks without the need to do a hyper-parameter search per task. Finally, Fig. 5 shows the evolution of λ and β(z) during meta-training. The coefficient λ is about 0.55 and β(z) is 0.8 for a large fraction of the time. The latter indicates that propensity score estimation is successful in sampling transitions from the meta-training replay buffer that are similar to the validation tasks. The value of λ remains relatively unchanged during training. This value indicates the fraction of transitions in the old data that are similar to those from the new tasks; since there are two distinct tasks in Ant-Fwd-Back, the value λ = 0.55 is appropriate.
4.4 RELATED WORK
Learning to learn: The idea of building an inductive bias for learning a new task by training on a large number of related tasks was established in a series of works (Utgoff, 1986; Schmidhuber, 1987; Baxter, 1995; Thrun, 1996; Thrun & Pratt, 2012). These papers propose building a base learner that fits on each task and a meta-learner that learns properties of the base learners to output a new base
learner for a new task. The recent literature instantiates this idea in two forms: (i) the meta-learner directly predicts the base-learner (Wang et al., 2016; Snell et al., 2017) and (ii) the meta-learner learns the updates of the base-learner (Bengio et al., 1992; Hochreiter et al., 2001; Finn et al., 2017).
Meta-training versus multi-task training: Meta-training aims to train a policy that can be adapted efficiently on a new task. Conceptually, the improved efficiency of a meta-learner comes from two things: (i) building a better inductive bias to initialize the learning (Schmidhuber et al., 1997; Baxter, 1995; 2000; Mitchell, 1980), or (ii) learning a better learning procedure (Bengio et al., 1997; Lee et al., 2019). The two notions of meta-learning above are complementary to each other and in fact, most recent literature using deep neural networks, e.g., MAML (Finn et al., 2017) and Prototypical Networks (Snell et al., 2017), conforms to the first notion of building a better inductive bias.
The multi-task training objective in MQL is the simplest possible instantiation of this idea: it maximizes the average reward on all tasks and learns a better prior without explicitly training for improving adaptation. This aspect of MQL coincides with a recent trend in meta-learning for image classification where it has been observed that modifications to episodic meta-training (Snell et al., 2017; Gidaris & Komodakis, 2018; Chen et al., 2018), or even foregoing meta-training completely (Dhillon et al., 2019) performs better. We speculate two reasons for this phenomenon: (i) meta-training methods are complex to implement and tune, and (ii) powerful function classes such as deep neural networks may have leftover capacity to adapt to a new task even if they are not explicitly trained for adaptation.
Context-based approaches: Both forms of meta-learning above have been employed relatively successfully for image classification (Snell et al., 2017; Ravi & Larochelle, 2016; Finn et al., 2017). It has however been difficult to replicate that empirical performance in RL: sensitivity to hyper-parameters (Henderson et al., 2018) precludes directly predicting the base-learner, while long-range temporal dependencies make it difficult to learn the updates of the base learner (Nichol et al., 2018). Recent methods for meta-RL instead leverage context and learn a policy that depends not just on the current state xt but on the previous history. This may be done in a recurrent fashion (Heess et al., 2015; Hausknecht & Stone, 2015) or by learning a latent representation of the task (Rakelly et al., 2019). Context is a powerful construct: as Fig. 1 shows, even a simple vanilla RL algorithm (TD3) when combined with context performs comparably to state-of-the-art meta-RL algorithms. However, context is a meta-training technique; it does not suggest a way to adapt a policy to a new task. For instance, Rakelly et al. (2019) do not update parameters of the policy on a new task. They rely on the latent representation of the context variable generalizing to new tasks. This is difficult if the new task is different from the training tasks; we discuss this further in Sec. 3.1.1.
Policy-gradient-based algorithms versus off-policy methods: Policy-gradient-based methods have high sample complexity (Ilyas et al., 2018). This is particularly limiting for meta-RL (Finn et al., 2017; Rothfuss et al., 2018; Houthooft et al., 2018) where one (i) trains on a large number of tasks and (ii) aims to adapt to a new task with few data. Off-policy methods offer substantial gains in sample complexity. This motivates our use of off-policy updates for both meta-training and adaptation. Off-policy updates allow using past data from other policies. MQL exploits this substantially; it takes up to 100× more updates using old data than new data during adaptation. Off-policy algorithms are typically very sensitive to hyper-parameters (Fujimoto et al., 2018a) but we show that MQL is robust to such sensitivity because it adapts automatically to the distribution shift using the Effective Sample Size (ESS).
Propensity score estimation has been extensively studied in both statistics (Robert & Casella, 2013; Quionero-Candela et al., 2009) and RL (Dudík et al., 2011; Jiang & Li, 2015; Kang et al., 2007; Bang & Robins, 2005). It is typically used to reweigh data from the proposal distribution to compute estimators on the target distribution. MQL uses propensity scores in a novel way: we fit a propensity score estimator on a subset of the meta-training replay buffer and use this model to sample transitions from the replay buffer that are similar to the new task. The off-policy updates in MQL are essential to exploiting this data. Setting the coefficient of the proximal term in the adaptation-phase objective (18–19) using the effective sample size (ESS) is inspired by the recent work of Fakoor et al. (2019).
5 DISCUSSION
The algorithm proposed in this paper, namely MQL, builds upon three simple ideas. First, Q-learning with context is sufficient to be competitive on current meta-RL benchmarks. Second, maximizing the average reward of training tasks is an effective meta-learning technique. The meta-training phase of MQL is significantly simpler than that of existing algorithms and yet it achieves comparable performance to the state of the art. This suggests that we need to re-think meta-learning in the context of rich function approximators such as deep networks. Third, if one is to adapt to new tasks with few data, it is essential to exploit every available avenue. MQL recycles data from the meta-training replay buffer using propensity estimation techniques. This data is essentially free and is completely neglected by other algorithms. This idea can potentially be used in problems outside RL such as few-shot and zero-shot image classification.
Finally, this paper sheds light on the nature of benchmark environments in meta-RL. The fact that even vanilla Q-learning with a context variable—without meta-training and without any adaptation— is competitive with state of the art algorithms indicates that (i) training and validation tasks in the current meta-RL benchmarks are quite similar to each other and (ii) current benchmarks may be insufficient to evaluate meta-RL algorithms. Both of these are a call to action and point to the need to invest resources towards creating better benchmark problems for meta-RL that drive the innovation of new algorithms.
REFERENCES
Deepak Agarwal, Lihong Li, and Alexander Smola. Linear-time estimators for propensity scores. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 93–100, 2011.
Heejung Bang and James M Robins. Doubly robust estimation in missing data and causal inference models. Biometrics, 61(4):962–973, 2005.
Jonathan Baxter. Learning internal representations. Flinders University of S. Aust., 1995.
Jonathan Baxter. A model of inductive bias learning. Journal of artificial intelligence research, 12:149–198, 2000.
Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule. In Preprints Conf. Optimality in Artificial and Biological Neural Networks, pp. 6–8. Univ. of Texas, 1992.
Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule, 1997.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv:1606.01540, 2016.
Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. 2018.
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv:1406.1078, 2014.
Guneet S Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. A baseline for few-shot image classification. arXiv:1909.02729, 2019.
Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. Rl2: Fast reinforcement learning via slow reinforcement learning. arXiv:1611.02779, 2016.
Miroslav Dudík, John Langford, and Lihong Li. Doubly robust policy evaluation and learning. arXiv:1103.4601, 2011.
Víctor Elvira, Luca Martino, and Christian P Robert. Rethinking the effective sample size. arXiv:1809.04129, 2018.
Rasool Fakoor, Pratik Chaudhari, and Alexander J Smola. P3o: Policy-on policy-off policy optimization. arXiv:1905.01756, 2019.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126– 1135. JMLR. org, 2017.
Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. arXiv:1812.02900, 2018a.
Scott Fujimoto, Herke van Hoof, and Dave Meger. Addressing function approximation error in actor-critic methods. arXiv:1802.09477, 2018b.
Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4367–4375, 2018.
Matthew Hausknecht and Peter Stone. Deep recurrent q-learning for partially observable mdps. In 2015 AAAI Fall Symposium Series, 2015.
Nicolas Heess, Jonathan J Hunt, Timothy P Lillicrap, and David Silver. Memory-based control with recurrent neural networks. arXiv:1512.04455, 2015.
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87–94. Springer, 2001.
Rein Houthooft, Yuhua Chen, Phillip Isola, Bradly Stadie, Filip Wolski, OpenAI Jonathan Ho, and Pieter Abbeel. Evolved policy gradients. In Advances in Neural Information Processing Systems, pp. 5400–5409, 2018.
Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. Are deep policy gradient algorithms truly policy gradient algorithms? arXiv:1811.02553, 2018.
Nan Jiang and Lihong Li. Doubly robust off-policy value evaluation for reinforcement learning. arXiv:1511.03722, 2015.
Joseph DY Kang, Joseph L Schafer, et al. Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. Statistical science, 22(4):523–539, 2007.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
Augustine Kong. A note on importance sampling using standardized weights. University of Chicago, Dept. of Statistics, Tech. Rep, 348, 1992.
Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. arXiv:1904.03758, 2019.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015.
Tom M Mitchell. The need for biases in learning generalizations. Department of Computer Science, Laboratory for Computer Science Research . . . , 1980.
Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv:1803.02999, 2018.
Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. Dataset shift in machine learning. The MIT Press, 2009.
Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, and Sergey Levine. Efficient off-policy metareinforcement learning via probabilistic context variables. arXiv:1903.08254, 2019.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. 2016.
Sashank J. Reddi, Barnabás Póczos, and Alexander J. Smola. Doubly robust covariate shift correction. In AAAI, 2015.
Sidney I Resnick. A probability path. Springer Science & Business Media, 2013.
Christian Robert and George Casella. Monte Carlo statistical methods. Springer Science & Business Media, 2013.
Jonas Rothfuss, Dennis Lee, Ignasi Clavera, Tamim Asfour, and Pieter Abbeel. Promp: Proximal meta-policy search. arXiv:1810.06784, 2018.
Jürgen Schmidhuber. Evolutionary principles in self-referential learning. (On learning how to learn: The meta-meta-... hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.
Jürgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. Machine Learning, 28(1):105–130, Jul 1997. ISSN 1573-0565.
John Schulman, Sergey Levine, Pieter Abbeel, Michael I Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, volume 37, pp. 1889–1897, 2015.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning, 2014.
Adrian Smith. Sequential Monte Carlo methods in practice. Springer Science & Business Media, 2013.
Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077–4087, 2017.
Sebastian Thrun. Is learning the n-th thing any easier than learning the first? In Advances in neural information processing systems, pp. 640–646, 1996.
Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE, 2012.
Paul E Utgoff. Shift of bias for inductive concept learning. Machine learning: An artificial intelligence approach, 2:107–148, 1986.
Jane X. Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Rémi Munos, Charles Blundell, Dharshan Kumaran, and Matthew Botvinick. Learning to reinforcement learn. CoRR, abs/1611.05763, 2016. URL http://arxiv.org/abs/1611.05763.
A PSEUDO-CODE
The pseudo-code for MQL during training and adaptation is given in Algorithm 1 and Algorithm 2. After MQL is trained for a given environment as described in Algorithm 1, it returns the meta-trained policy θ and a replay buffer containing the training tasks.
Next, Algorithm 2 runs the adaptation procedure which adapts the meta-trained policy to a test task D with few data. To do so, MQL optimizes the adaptation objective in two steps. After gathering data from a test task D, MQL first updates the policy using the new data (line 4). MQL then fits a logistic classifier on a mini-batch of transitions from the meta-training replay buffer and the transitions collected from the test task, and estimates ÊSS (lines 5–6). Finally, the adaptation step runs for n iterations (lines 7–10), during which MQL exploits past data, using the propensity score to decide whether or not a given sample is related to the current test task.
Algorithm 1: MQL - Meta-training
Input: Set of training tasks Dmeta
1  Initialize the replay buffer
2  Initialize parameters θ of an off-policy method, e.g., TD3
3  while not done do
4      // Rollout and update policy
5      Sample a task D ∼ Dmeta
6      Gather data from task D using policy πθ while feeding transitions through the context GRU; add the trajectory to the replay buffer
7      b ← sample mini-batch from the replay buffer
8      Update parameters θ using mini-batch b and Eqn. (15)
9  θmeta ← θ
10 return θmeta, replay buffer
Algorithm 2: MQL - Adaptation
Input: Test task D, meta-training replay buffer, meta-trained policy θmeta
1  Initialize temporary buffer buf
2  θ ← θmeta
3  buf ← gather data from D using πθmeta
4  Update θ using buf and Eqn. (18)
5  Fit β(D) using buf and the meta-training replay buffer using Eqn. (12)
6  Estimate ÊSS using β(D) and Eqn. (13)
7  for i ≤ n do
8      b ← sample mini-batch from the meta-training replay buffer
9      Calculate β for b
10     Update θ using Eqn. (19)
11 Evaluate θ on a new rollout from task D
12 return θ
B OUT-OF-DISTRIBUTION TASKS
MQL is designed to explicitly use data from the new task along with off-policy data from old, possibly very different tasks. This is on account of two things: (i) the loss function of MQL does not use the old data if it is very different from the new task (β is close to zero for all samples), and (ii) the first term in (18) makes multiple updates using data from the new task. To explore this aspect, we create an out-of-distribution task using the “Half-Cheetah-Vel” environment wherein we use disjoint sets of velocities for meta-training and testing. The setup is as follows:
• Half-Cheetah-Vel-OOD-Medium: the target velocity for a training task is sampled uniformly at random from [0, 2.5] while that for a test task is sampled uniformly at random from [2.5, 3.0]. This is what we call a “medium”-hardness task because, although the distributions of train and test velocities are disjoint, they are close to each other.
• Half-Cheetah-Vel-OOD-Hard: the target velocity for a training task is sampled uniformly at random from [0, 1.5] while that for a test task is sampled uniformly at random from [2.5, 3.0]. This is a “hard” task because the distributions of train and test velocities are far away from each other.
Fig. 6a shows that MQL significantly outperforms PEARL when the train and test target velocities come from disjoint sets. We used the published code of PEARL (Rakelly et al., 2019) for this experiment. This shows that the adaptation in MQL is crucial to generalizing to new situations which are not a part of the meta-training process. Fig. 6b shows the evolution of the proximal penalty coefficient λ and the propensity score β(z) during meta-training for the medium-hard task. We see that λ ≈ 0.8 while β(z) ≈ 0.2 throughout training. This indicates that MQL automatically adjusts its test-time adaptation to use only few samples in (19) if the test task provides transitions quite different than those in the replay buffer.
We next discuss results on the harder task Half-Cheetah-Vel-OOD-Hard. There is a very large gap between training and test target velocities in this case. Fig. 7a shows the comparison with the same test protocol as the other experiments in this paper. In particular, we collect 200 time-steps from the new task and use it for adaptation in both MQL and TD3-context. Since this task is particularly hard, we also ran an experiment where 1200 time-steps (6 episodes) are given to the two algorithms for adaptation. The results are shown in Fig. 7b. In both cases, we see that MQL is better than TD3-context by a large margin (the standard deviation on these plots is high because the environment is hard). Note that since we re-initialize the hidden state of the context network at the beginning of each episode, TD3-context cannot take advantage of the extra time-steps. MQL on the other hand updates the policy explicitly and can take advantage of this extra data.
For the sake of thoroughness, we also collected 800 time-steps from the new task within the same episode; the results are shown in Fig. 8a. We again notice that MQL results in slightly higher rewards than TD3-context, in spite of the fact that both algorithms suffer a large degradation in performance as compared to Figs. 7a and 7b.
Figs. 7c, 7d and 8b show that the proximal penalty coefficient λ ≈ 1 and the propensity score β(z) ≈ 0 for a large fraction of training. This shows that MQL is able to automatically discard samples unrelated to the new task during the adaptation phase.
C MORE ABLATION STUDIES
We conduct a series of additional ablation studies to analyze the different components of the MQL algorithm. We use two environments for this purpose, namely Half-Cheetah-Vel and Walker-2D-Params. Fig. 9 and Fig. 10 show the results of these experiments. These experiments show that the adaptation phase is more useful for Half-Cheetah-Vel than for Walker-2D-Params: test and training tasks are very similar in Walker-2D-Params, which helps TD3-context achieve strong performance that leaves no window for improvement through adaptation.
D HYPER-PARAMETERS AND MORE DETAILS OF THE EMPIRICAL RESULTS

1. What is the focus of the paper regarding reinforcement learning?
2. What are the strengths of the proposed approach, particularly in sample efficiency and adaptation schemes?
3. Are there any concerns regarding the novelty and uniqueness of the proposed method compared to prior works?
4. How does the reviewer assess the clarity, quality, and relevance of the paper's content?
5. What are the weaknesses of the paper, especially in the presentation of the new approach as a meta-learning method?

Review
The authors investigate meta-learning in reinforcement learning with respect to sample efficiency and the necessity of meta-learning an adaptation scheme. Based on their findings, they propose a new algorithm 'MQL' (Meta-Q-Learning) that is off-policy and has a fixed adaptation scheme but is still competitive on meta-RL benchmarks (a distribution of environments that differ slightly in their reward functions).
They motivate the paper by data-inefficiency of current meta-learning approaches and empirical results suggesting that meta-learning the adaptation scheme is less important than feature reuse.
On the other hand, their introduction section would benefit from additional references to the kind of meta-learning they describe. In particular, their so-called "definition of meta-learning" is mostly about domain randomization (e.g. Tobin et al. 2017 https://arxiv.org/abs/1703.06907) and not about the broader 'learning to learn' RL methodology (in particular Schmidhuber 1994 "On learning how to learn learning strategies").
**The authors make the following contributions:**
1. They show that Q-Learning trained on multiple tasks with a context variable as an input (an RNN state summarizing previous transitions) is competitive to related work when evaluated on a test task even though no adaptation is performed
2. Based on these observations, they introduce a new method for off-policy RL that does not directly optimize for adaptation but instead uses a fixed adaptation scheme
3. The new method leverages data during meta-testing that was collected during meta-training using importance weights for increased sample efficiency
**Overall, we believe the contributions are significant and sufficiently empirically justified.**
There are strong similarities, however, to parallel work on analyzing whether MAML relies on feature reuse or rapid learning (Raghu et al. 2019 https://arxiv.org/abs/1909.09157).
This work and the present submission conclude that feature reuse is much more significant than meta-learning an adaptation scheme when evaluated on current meta-RL benchmarks. This is a significant result and supports the new method developed in this paper.
During meta-training, their proposed method maximizes only the average return across tasks, not the ability to adapt from the resulting parameters.
Their method introduces a fixed (non-learned) adaptation scheme that performs favorably compared to certain methods from the existing meta-learning literature and demonstrates that even dropping this adaptation still does well.
There are strong similarities to Nichol et al. 2018 (https://arxiv.org/abs/1803.02999). We encourage the authors to relate this work to Raghu et al. 2019 and Nichol et al. 2018.
**Despite these interesting results, we strongly disagree with the meta-learning narrative of their new method.**
Because the adaptation scheme is no longer optimized directly and a fixed adaptation scheme is assumed instead, the approach in this paper is not a meta-learning algorithm.
Instead, this method has strong similarities with transfer-learning and domain adaptation (first training on one distribution of tasks, then fine-tuning on another task).
The authors should discuss the links to these fields of research, and clarify what's really novel, already in the abstract.
For example, on page 2 the authors claim that optimizing the multi-task objective (the mean error across tasks) is the simplest form of meta-learning. This objective, however, is NOT meta-learning.
**Decision.**
The submission contains strong empirical results emphasizing the significance of feature reuse and the insignificance of learned adaptation on the tested meta-RL benchmarks.
The reuse of experience from meta-training during meta-testing by employing importance weights is also an interesting contribution.
In contrast, we are not satisfied with the presentation of their new approach as a meta-learning approach. This method should be introduced along the lines of: 'Transfer Learning / Feature Reuse in RL is competitive to meta-learning across similar tasks'.
In its current form, we tend to reject the paper because it further obscures what the term meta-learning refers to. The authors are confusing it with more limited transfer learning.
Additionally, it was not clear to us whether the quadratic penalty they add to their adaptation scheme is only empirically valid or whether there is a theoretical reason.
For now, we'd lean towards rejecting this submission, but we might change our minds, provided the comments above were addressed in a satisfactory way - let us wait for the rebuttal.
Edit after rebuttal: score increased! |
ICLR | Title
Meta-Q-Learning
Abstract
This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-Reinforcement Learning (meta-RL). MQL builds upon three simple ideas. First, we show that Q-learning is competitive with state-of-the-art meta-RL algorithms if given access to a context variable that is a representation of the past trajectory. Second, a multi-task objective to maximize the average reward across the training tasks is an effective method to meta-train RL policies. Third, past data from the meta-training replay buffer can be recycled to adapt the policy on a new task using off-policy updates. MQL draws upon ideas in propensity estimation to do so and thereby amplifies the amount of available data for adaptation. Experiments on standard continuous-control benchmarks suggest that MQL compares favorably with the state of the art in meta-RL. 1 I N T R O D U C T I O N
N/A
M E TA - Q - L E A R N I N G
Rasool Fakoor1, Pratik Chaudhari2∗, Stefano Soatto1, Alexander Smola1 1 Amazon Web Services 2 University of Pennsylvania Email: {fakoor, soattos, smola}@amazon.com, pratikac@seas.upenn.edu
A B S T R A C T
This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-Reinforcement Learning (meta-RL). MQL builds upon three simple ideas. First, we show that Q-learning is competitive with state-of-the-art meta-RL algorithms if given access to a context variable that is a representation of the past trajectory. Second, a multi-task objective to maximize the average reward across the training tasks is an effective method to meta-train RL policies. Third, past data from the meta-training replay buffer can be recycled to adapt the policy on a new task using off-policy updates. MQL draws upon ideas in propensity estimation to do so and thereby amplifies the amount of available data for adaptation. Experiments on standard continuous-control benchmarks suggest that MQL compares favorably with the state of the art in meta-RL.
1 I N T R O D U C T I O N
Reinforcement Learning (RL) algorithms have demonstrated good performance on simulated data. There are however two main challenges in translating this performance to real robots: (i) robots are complex and fragile which precludes extensive data collection, and (ii) a real robot may face an environment that is different than the simulated environment it was trained in. This has fueled research into MetaReinforcement Learning (meta-RL) which develops algorithms that “meta-train” on a large number of different environments, e.g., simulated ones, and aim to adapt to a new environment with few data.
How well does meta-RL work today? Fig. 1 shows the performance of two prototypical meta-RL algorithms on four standard continuous-control benchmarks.1 We compared them to the following simple baseline: an off-policy RL algorithm (TD3 by Fujimoto et al. (2018b)) and which was trained to maximize the average reward over all training tasks and modified to use a “context variable” that represents the trajectory. All algorithms in this figure use the same evaluation protocol. It is surprising that this
simple non-meta-learning-based method is competitive with state-of-the-art meta-RL algorithms. This is the first contribution of our paper: we demonstrate that it is not necessary to meta-train policies to do well on existing benchmarks.
Our second contribution is an off-policy meta-RL algorithm named Meta-Q-Learning (MQL) that builds upon the above result. MQL uses a simple meta-training procedure: it maximizes the average
∗Work done while at Amazon Web Services 1We obtained the numbers for MAML and PEARL from training logs published by Rakelly et al. (2019).
rewards across all meta-training tasks using off-policy updates to obtain
θ̂meta = arg max θ
1
n n∑ k=1 E τ∼Dk [ `k(θ) ] (1)
where `k(θ) is the objective evaluated on the transition τ obtained from the task Dk(θ), e.g., 1-step temporal-difference (TD) error would set `k(θ) = TD2(θ; τ). This objective, which we call the multi-task objective, is the simplest form of meta-training.
For adapting the policy to a new task, MQL samples transitions from the meta-training replay buffer that are similar to those from the new task. This amplifies the amount of data available for adaptation but it is difficult to do because of the large potential bias. We use techniques from the propensity estimation literature for performing this adaptation and the off-policy updates of MQL are crucial to doing so. The adaptation phase of MQL solves
arg max θ
{ E
τ∼Dnew
[ `new(θ) ] + E τ∼Dmeta [ β(τ ;Dnew,Dmeta) `new(θ) ] − ( 1− ÊSS ) ‖θ − θ̂meta‖22 } (2)
whereDmeta is the meta-training replay buffer, the propensity score β(τ ;Dnew,Dmeta) is the odds of a transition τ belonging to Dnew versusDmeta, and ÊSS is the Effective Sample Size between Dnew and Dmeta that is a measure of the similarly of the new task with the meta-training tasks. The first term computes off-policy updates on the new task, the second term performs β(·)-weighted off-policy updates on old data, while the third term is an automatically adapting proximal term that prevents degradation of the policy during adaptation.
We perform extensive experiments in Sec. 4.2 including ablation studies using standard meta-RL benchmarks that demonstrate that MQL policies obtain higher average returns on new tasks even if they are meta-trained for fewer time-steps than state-of-the-art algorithms.
2 B A C K G R O U N D
This section introduces notation and formalizes the meta-RL problem. We discuss techniques for estimating the importance ratio between two probability distributions in Sec. 2.2.
Consider a Markov Decision Processes (MDP) denoted by
xt+1 = f k(xt, ut, ξt) x0 ∼ pk0 , (3)
where xt ∈ X ⊂ Rd are the states and ut ∈ U ⊂ Rp are the actions. The dynamics fk is parameterized by k ∈ {1, . . . , n} where each k corresponds to a different task. The domain of all these tasks, X for the states and U for the actions, is the same. The distribution pk0 denotes the initial state distribution and ξt is the noise in the dynamics. Given a deterministic policy uθ(xt), the actionvalue function for γ-discounted future rewards rkt := r
k(xt, uθ(xt)) over an infinite time-horizon is
qk(x, u) = E ξ(·) [ ∞∑ t=0 γt rkt |x0 = x, u0 = u, ut = uθ(xt) ] . (4)
Note that we have assumed that different tasks have the same state and action space and may only differ in their dynamics fk and reward function rk. Given one task k ∈ {1, . . . , n}, the standard Reinforcement Learning (RL) formalism solves for
θ̂k = arg max θ `k(θ) where `k(θ) = E x∼p0
[ qk(x, uθ(x)) ] . (5)
Let us denote the dataset of all states, actions and rewards pertaining to a task k and policy uθ(x) by Dk(θ) = { xt, uθ(xt), r k, xt+1 = f k(xt, uθ(xt), ξt) } t≥0, x(0)∼pk0 , ξ(·) ;
we will often refer toDk as the “task” itself. The Deterministic Policy Gradient (DPG) algorithm (Silver et al., 2014) for solving (5) learns a ϕ-parameterized approximation qϕ to the optimal value func-
tion qk by minimizing the Bellman error and the optimal policy uθ that maximizes this approximation by solving the coupled optimization problem
ϕ̂k = arg min ϕ E τ∼Dk
[ ( qϕ(x, u)− rk − γ qϕ(x′, uθ̂k(x ′)) )2 ] ,
θ̂k = arg max θ E τ∼Dk
[ q ϕ̂k (x, uθ(x)) ] .
(6)
The 1-step temporal difference error (TD error) is defined as TD2(θ) = ( qϕ(x, u)− rk − γ qϕ(x′, uθ(x′)) )2 (7)
where we keep the dependence of TD(·) on ϕ implicit. DPG, or its deep network-based variant DDPG (Lillicrap et al., 2015), is an off-policy algorithm. This means that the expectations in (6) are computed using data that need not be generated by the policy being optimized (uθ), this data can come from some other policy.
In the sequel, we will focus on the parameters θ parameterizing the policy. The parameters ϕ of the value function are always updated to minimize the TD-error and are omitted for clarity.
2 . 1 M E TA - R E I N F O R C E M E N T L E A R N I N G ( M E TA - R L )
Meta-RL is a technique to learn an inductive bias that accelerates the learning of a new task by training on a large of number of training tasks. Formally, meta-training on tasks from the meta-training set Dmeta = { Dk } k=1,...,n involves learning a policy
θ̂meta = arg max θ
1
n n∑ k=1 `kmeta(θ) (8)
where `kmeta(θ) is a meta-training loss that depends on the particular method. Gradient-based meta-RL, let us take MAML by Finn et al. (2017) as a concrete example, sets
`kmeta(θ) = ` k(θ + α∇θ`k(θ)) (9)
for a step-size α > 0; `k(θ) is the objective of non-meta-RL (5). In this case `kmeta is the objective obtained on the task Dk after one (or in general, more) updates of the policy on the task. The idea behind this is that even if the policy θ̂meta does not perform well on all tasks in Dmeta it may be updated quickly on a new task Dnew to obtain a well-performing policy. This can either be done using the same procedure as that of meta-training time, i.e., by maximizing `newmeta(θ) with the policy θ̂meta as the initialization, or by some other adaptation procedure. The meta-training method and the adaptation method in meta-RL, and meta-learning in general, can be different from each other.
2 . 2 L O G I S T I C R E G R E S S I O N F O R E S T I M AT I N G T H E P R O P E N S I T Y S C O R E
Consider standard supervised learning: given two distributions q(x) (say, train) and p(x) (say, test), we would like to estimate how a model’s predictions ŷ(x) change across them. This is formally done using importance sampling:
E x∼p(x) E y|x
[ `(y, ŷ(x)) ] = E x∼q(x) E y|x [ β(x) `(y, ŷ(x)) ] ; (10)
where y|x are the true labels of data, the predictions of the model are ŷ(x) and `(y, ŷ(x)) is the loss for each datum (x, y). The importance ratio β(x) = dpdq (x), also known as the propensity score, is the Radon-Nikodym derivative (Resnick, 2013) of the two data densities and measures the odds of a sample x coming from the distribution p versus the distribution q. In practice, we do not know the densities q(x) and p(x) and therefore need to estimate β(x) using some finite data Xq = {x1, . . . , xm} drawn from q and Xp = {x′1, . . . , x′m} drawn from p. As Agarwal et al. (2011) show, this is easy to do using logistic regression. Set zk = 1 to be the labels for the data in Xq and zk = −1 to be the labels of the data in Xp for k ≤ m and fit a logistic classifier on the combined 2m
samples by solving
w∗ = min w
1
2m ∑ (x,z) log ( 1 + e−zw >x ) + c ‖w‖2. (11)
This gives
β(x) = P(z = −1|x) P(z = 1|x) = e−w ∗>x. (12)
Normalized Effective Sample Size (ÊSS): A related quantity to β(x) is the normalized Effective Sample Size (ÊSS) which we define as the relative number of samples from the target distribution p(x) required to obtain an estimator with performance (say, variance) equal to that of the importance sampling estimator (10). It is not possible to compute the ÊSS without knowing both densities q(x) and p(x) but there are many heuristics for estimating it. A popular one in the Monte Carlo literature (Kong, 1992; Smith, 2013; Elvira et al., 2018) is
ÊSS = 1
m
( ∑m k=1 β(xk))
2∑m k=1 β(xk) 2 ∈ [0, 1] (13)
where X = {x1, . . . , xm} is some finite batch of data. Observe that if two distributions q and p are close then the ÊSS is close to one; if they are far apart the ÊSS is close to zero.
3 M Q L
This section describes the MQL algorithm. We begin by describing the meta-training procedure of MQL including a discussion of multi-task training in Sec. 3.1. The adaptation procedure is described in Sec. 3.2.
3 . 1 M E TA - T R A I N I N G
MQL performs meta-training using the multi-task objective. Note that if one sets
`kmeta(θ) , ` k(θ) = E
x∼pk0
[ qk(x, uθ(x)) ] (14)
in (8) then the parameters θ̂meta are such that they maximize the average returns over all tasks from the meta-training set. We use an off-policy algorithm named TD3 (Fujimoto et al., 2018b) as the building block and solve for
θ̂meta = arg min θ
1
n n∑ k=1 E τ∼Dk [ TD2(θ) ] ; (15)
where TD(·) is defined in (7). As is standard in TD3, we use two action-value functions parameterized by ϕ1 and ϕ2 and take their minimum to compute the target in (7). This trick known as “doubleQ-learning” reduces the over-estimation bias. Let us emphasize that (14) is a special case of the procedure outlined in (8). The following remark explains why MQL uses the multi-task objective as opposed to the meta-training objective used, for instance, in existing gradient-based meta-RL algorithms.
Remark 1. Let us compare the critical points of the m-step MAML objective (9) to those of the multi-task objective which uses (14). As is done by the authors in Nichol et al. (2018), we can perform a Taylor series expansion around the parameters θ to obtain
∇`kmeta(θ) = ∇`k(θ) + 2α(m− 1) ( ∇2`k(θ) ) ∇`k(θ) +O(α2). (16)
Further, note that∇`kmeta in (16) is also the gradient of the loss
`k(θ) + α(m− 1)‖∇`k(θ)‖22 (17)
up to first order. This lends a new interpretation that MAML is attracted towards regions in the loss landscape that under-fit on individual tasks: parameters with large ‖∇`k‖2 will be far from the local maxima of `k(θ). The parameters α and m control this under-fitting. Larger the number of gradient steps, larger the under-fitting effect. This remark suggests that the adaptation speed of gradient-based meta-learning comes at the cost of under-fitting on the tasks.
3.1.1 DESIGNING CONTEXT
As discussed in Sec. 1 and 4.4, the identity of the task in meta-RL can be thought of as the hidden variable of an underlying partially-observable MDP. The optimal policy therefore depends on the entire trajectory of states, actions and rewards. We therefore design a recurrent context variable zt that depends on {(xi, ui, ri)}i≤t. We set zt to the hidden state at time t of a Gated Recurrent Unit (GRU by Cho et al. (2014)) model. All the policies uθ(x) and value functions qϕ(x, u) in MQL are conditioned on the context and implemented as uθ(x, z) and qϕ(x, u, z). Any other recurrent model can be used to design the context; we used a GRU because it offers a good trade-off between a rich representation and computational complexity.
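A minimal PyTorch sketch of such a context encoder is given below; the class name and tensor shapes are our own choices, not prescribed by the algorithm.

import torch
import torch.nn as nn

class ContextGRU(nn.Module):
    # Deterministic, recurrent context of Sec. 3.1.1: z_t is the hidden
    # state of a GRU run over the trajectory {(x_i, u_i, r_i)}_{i <= t}.
    def __init__(self, x_dim, u_dim, z_dim):
        super().__init__()
        self.gru = nn.GRU(x_dim + u_dim + 1, z_dim, batch_first=True)

    def forward(self, x, u, r, h0=None):
        # x: (B, T, x_dim), u: (B, T, u_dim), r: (B, T, 1).
        z, h = self.gru(torch.cat([x, u, r], dim=-1), h0)
        # z[:, t] conditions the policy u_theta(x_t, z_t) and the
        # value function q_phi(x_t, u_t, z_t).
        return z, h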
Remark 2 (MQL uses a deterministic context that is not permutation invariant). We have aimed for simplicity while designing the context. The context in MQL is built using an off-the-shelf model like a GRU and is not permutation invariant. Indeed, the direction of time affords crucial information about the dynamics of a task to the agent, e.g., a Half-Cheetah running forward versus backward has arguably the same state trajectory but in a different order. Further, the context in MQL is a deterministic function of the trajectory. Both these aspects are different from the context used by Rakelly et al. (2019), who design an inference network and sample a probabilistic context conditioned on a moving window. RL algorithms are quite complex and challenging to reproduce. Current meta-RL techniques which build upon them further exacerbate this complexity. Our demonstration that a simple context variable is enough is an important contribution.
3.2 ADAPTATION TO A NEW TASK
We next discuss the adaptation procedure which adapts the meta-trained policy θ̂meta to a new task Dnew with few data. MQL optimizes the adaptation objective introduced in (2) in two steps.
1. Vanilla off-policy adaptation: The first step is to update the policy using the new data as
\arg\max_\theta \left\{ \mathbb{E}_{\tau \sim \mathcal{D}_{\mathrm{new}}}\!\left[ \ell_{\mathrm{new}}(\theta) \right] - \frac{\lambda}{2} \|\theta - \hat{\theta}_{\mathrm{meta}}\|_2^2 \right\}. \qquad (18)
The quadratic penalty ‖θ − θ̂meta‖2 keeps the parameters close to θ̂meta. This is crucial to reducing the variance of the model that is adapted using few data from the new task (Reddi et al., 2015). Off-policy learning is critical in this step because of its sample efficiency. We initialize θ to θ̂meta while solving (18).
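In code, step 1 is an ordinary deterministic policy-gradient update with the added quadratic term. The sketch below is ours and shows only the policy-improvement half (the critic is updated by minimizing the TD error as in (15)); theta_meta is assumed to be a list of frozen copies of the meta-trained parameters, and the context arguments are omitted for brevity.

def adapt_step_1(policy, theta_meta, q, new_batch, lam, optimizer):
    # Eqn. (18): maximize the off-policy objective on data from the
    # new task while staying close to the meta-trained parameters.
    obj = q(new_batch["x"], policy(new_batch["x"])).mean()
    prox = sum(((p - p0) ** 2).sum()
               for p, p0 in zip(policy.parameters(), theta_meta))
    loss = -obj + 0.5 * lam * prox
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()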
2. Importance-ratio corrected off-policy updates: The second step of MQL exploits the meta-training replay buffer. Meta-training tasks Dmeta are disjoint from Dnew but because they are expected to come from the same task distribution, transitions collected during meta-training can potentially be exploited to adapt the policy. This is difficult to do on two counts. First, the meta-training transitions do not come from Dnew. Second, even for transitions from the same task, it is non-trivial to update the policy because of extrapolation error (Fujimoto et al., 2018a): the value function has high error on states it has not seen before. Our use of the propensity score to reweigh transitions is a simpler version of the conditional generative model used by Fujimoto et al. (2018a) in this context.
MQL fits a logistic classifier on a mini-batch of transitions from the meta-training replay buffer and the transitions collected from the new task in step 1. The context variable zt is the feature for this classifier. The logistic classifier estimates the importance ratio β(τ ;Dnew,Dmeta) and can be used to reweigh data from the meta-training replay buffer for taking updates as
\arg\max_\theta \left\{ \mathbb{E}_{\tau \sim \mathcal{D}_{\mathrm{meta}}}\!\left[ \beta(\tau; \mathcal{D}_{\mathrm{new}}, \mathcal{D}_{\mathrm{meta}})\, \ell_{\mathrm{new}}(\theta) \right] - \frac{\lambda}{2} \|\theta - \hat{\theta}_{\mathrm{meta}}\|_2^2 \right\}. \qquad (19)
We have again included a quadratic penalty ‖θ − θ̂meta‖2 that keeps the new parameters close to θ̂meta. Estimating the importance ratio involves solving a convex optimization problem on few samples (typically, 200 from the new task and 200–400 from the meta-training tasks). This classifier allows MQL to exploit the large amount of past data. In practice, we perform as many as 100× more weight updates using (19) than (18).
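Step 2 differs from step 1 only in that every sample drawn from the meta-training replay buffer is reweighed by a clipped β. Below is a sketch under the same assumptions as above; the clipping constant beta_clip is a hyper-parameter (cf. Remark 4), and its default here is our own placeholder.

def adapt_step_2(policy, theta_meta, q, meta_batch, beta, lam,
                 optimizer, beta_clip=2.0):
    # Eqn. (19): beta-weighted off-policy update on old transitions;
    # clipping beta bounds the variance of the stochastic update.
    w = beta.clamp(max=beta_clip)
    obj = (w * q(meta_batch["x"], policy(meta_batch["x"]))).mean()
    prox = sum(((p - p0) ** 2).sum()
               for p, p0 in zip(policy.parameters(), theta_meta))
    loss = -obj + 0.5 * lam * prox
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()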
Remark 3 (Picking the coefficient λ). Following Fakoor et al. (2019), we pick

λ = 1 − ÊSS

for both steps (18–19). This relaxes the quadratic penalty if the new task is similar to the meta-training tasks (ÊSS is large) and vice-versa. While λ could be tuned as a hyper-parameter, our empirical results show that adapting it using ÊSS is a simple and effective heuristic.
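In code, this heuristic is a single line on top of the ÊSS sketch from Sec. 2.2:

# Remark 3: relax the proximal penalty when the new task resembles the
# meta-training tasks (large ESS) and tighten it when it does not.
lam = 1.0 - normalized_ess(beta)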
Remark 4 (Details of estimating the importance ratio). It is crucial to ensure that the logistic classifier for estimating β generalizes well if we are to reweigh transitions in the meta-training replay buffer that are different than the ones it was fitted upon. We do so in two ways: (i) the regularization coefficient in (11) is chosen to be relatively large, so that we prefer false negatives rather than risk false positives; (ii) transitions with very high β are valuable for updating (19) but cause a large variance in stochastic gradient descent-based updates, so we clip β before taking the update in (19). The clipping constant is a hyper-parameter and is given in Sec. 4.
MQL requires having access to the meta-training replay buffer during adaptation. This is not a debilitating requirement and there are a number of clustering techniques that can pick important transitions from the replay-buffer if a robotic agent is limited by available hard-disk space. The meta-training replay buffer is at most 3 GB for the experiments in Sec. 4.
4 EXPERIMENTS
This section presents the experimental results of MQL. We first discuss the setup and provide details of the benchmark in Sec. 4.1. This is followed by empirical results and ablation experiments in Sec. 4.2.
4.1 SETUP
Tasks and algorithms: We use the MuJoCo (Todorov et al., 2012) simulator with OpenAI Gym (Brockman et al., 2016) on continuous-control meta-RL benchmark tasks. These tasks have different rewards or randomized system parameters (Walker-2D-Params) and have been used in previous papers such as Finn et al. (2017); Rothfuss et al. (2018); Rakelly et al. (2019). We compare against standard baseline algorithms, namely MAML (TRPO (Schulman et al., 2015) variant) (Finn et al., 2017), RL2 (Duan et al., 2016), ProMP (Rothfuss et al., 2018) and PEARL (Rakelly et al., 2019). We obtained the training curves and hyper-parameters for these algorithms from the code published by Rakelly et al. (2019).
We will compare the above algorithms against: (i) vanilla TD3 (Fujimoto et al., 2018b) without any adaptation on new tasks, (ii) TD3-context: TD3 with the GRU-based context of Sec. 3.1.1 without any adaptation, and (iii) MQL: TD3 with context and adaptation on the new task using the procedure in Sec. 3.2. All three variants use
the multi-task objective for meta-training (15). We use Adam (Kingma & Ba, 2014) for optimizing all the loss functions in this paper.
Evaluation: Current meta-RL benchmarks lack a systematic evaluation procedure.² For each environment, Rakelly et al. (2019) constructed a fixed set of meta-training tasks (Dmeta) and a validation set of tasks Dnew that are disjoint from the meta-training set. To enable direct comparison with published empirical results, we closely followed the evaluation code of Rakelly et al. (2019) to create these tasks. We also use the exact same evaluation protocol as that of these authors, e.g., 200 time-steps of data from the new task, or the number of evaluation episodes. We report the undiscounted return on the validation tasks with statistics computed across 5 random seeds.
4.2 RESULTS
Our first result, in Fig. 2, is to show that vanilla off-policy learning with context, without any adaptation, is competitive with state-of-the-art meta-RL algorithms. We used a standard implementation of TD3 and trained on the meta-training tasks using the multi-task objective (15). Hyper-parameters for these tasks are provided in Appendix D. This result is surprising and has gone unnoticed in the current literature. Policies that have access to the context can easily generalize to the validation tasks and achieve performance that is comparable to more sophisticated meta-RL algorithms.
We next evaluate MQL against existing meta-RL benchmarks on all environments. The results are shown in Fig. 3. We see that for all environments except Walker-2D-Params and Ant-Goal-2D, MQL obtains comparable or better returns on the validation tasks. In most cases, in particular for the challenging Humanoid-Direc-2D environment, MQL converges faster than existing algorithms. MAML and ProMP require about 100M time-steps to converge to returns that are significantly worse
² For instance, training and validation tasks are not explicitly disjoint in Finn et al. (2017); Rothfuss et al. (2018) and these algorithms may benefit during adaptation from having seen the same task before. The OpenAI Gym environments used in Finn et al. (2017); Rothfuss et al. (2018); Rakelly et al. (2019) provide different rewards for the same task. The evaluation protocol in existing papers, e.g., the length of an episode for a new task, or the amount of data available for adaptation from the new task, is not consistent. This makes reproducing experiments and comparing numerical results extremely difficult.
than the returns of off-policy algorithms like MQL and PEARL. Compare the training curve for TD3-context for the Ant-Goal-2D environment in Fig. 2 with that of the same environment in Fig. 3: the former shows a prominent dip in performance as meta-training progresses; this dip is absent in Fig. 3 and can be attributed to the adaptation phase of MQL.
4.3 ABLATION EXPERIMENTS
We conduct a series of ablation studies to analyze the different components of the MQL algorithm. We use two environments for this purpose, namely Half-Cheetah-Fwd-Back and Ant-Fwd-Back. Fig. 4a shows that the adaptation in MQL in (18) and (19) improves performance. Also observe that MQL has a smaller standard deviation in the returns as compared to TD3-context, which does not perform any adaptation; this can be seen as the adaptation phase making up for the lost performance of the meta-trained policy on a difficult task. Next, we evaluate the importance of the additional data from the replay buffer in MQL. Fig. 4b compares the performance of MQL with and without updates in (19). We see that the old data, even if it comes from different tasks, is useful to improve the performance on top of (18). Fig. 4c shows the effectiveness of setting λ = 1 − ÊSS as compared to a fixed value of λ = 0.5. We see that modulating the quadratic penalty with ÊSS helps, although the effect is minor for these environments. The ideal value of λ depends on a given task and using 1 − ÊSS can help to adjust to different tasks without the need to do a hyper-parameter search per task. Finally, Fig. 5 shows the evolution of λ and β(z) during meta-training. The coefficient λ is about 0.55 and β(z) is 0.8 for a large fraction of the time. The latter indicates that propensity score estimation is successful in sampling transitions from the meta-training replay buffer that are similar to the validation tasks. The value of λ remains relatively unchanged during training. This value indicates the fraction of transitions in the old data that are similar to those from the new tasks; since there are two distinct tasks in Ant-Fwd-Back, the value λ = 0.55 is appropriate.
4.4 RELATED WORK
Learning to learn: The idea of building an inductive bias for learning a new task by training on a large number of related tasks was established in a series of works (Utgoff, 1986; Schmidhuber, 1987; Baxter, 1995; Thrun, 1996; Thrun & Pratt, 2012). These papers propose building a base learner that fits on each task and a meta-learner that learns properties of the base learners to output a new base
learner for a new task. The recent literature instantiates this idea in two forms: (i) the meta-learner directly predicts the base-learner (Wang et al., 2016; Snell et al., 2017) and (ii) the meta-learner learns the updates of the base-learner (Bengio et al., 1992; Hochreiter et al., 2001; Finn et al., 2017).
Meta-training versus multi-task training: Meta-training aims to train a policy that can be adapted efficiently on a new task. Conceptually, the improved efficiency of a meta-learner comes from two things: (i) building a better inductive bias to initialize the learning (Schmidhuber et al., 1997; Baxter, 1995; 2000; Mitchell, 1980), or (ii) learning a better learning procedure (Bengio et al., 1997; Lee et al., 2019). The two notions of meta-learning above are complementary to each other and in fact, most recent literature using deep neural networks, e.g., MAML (Finn et al., 2017) and Prototypical Networks (Snell et al., 2017), conforms to the first notion of building a better inductive bias.
The multi-task training objective in MQL is the simplest possible instantiation of this idea: it maximizes the average reward on all tasks and learns a better prior without explicitly training for improving adaptation. This aspect of MQL coincides with a recent trend in meta-learning for image classification where it has been observed that modifications to episodic meta-training (Snell et al., 2017; Gidaris & Komodakis, 2018; Chen et al., 2018), or even foregoing meta-training completely (Dhillon et al., 2019) performs better. We speculate two reasons for this phenomenon: (i) meta-training methods are complex to implement and tune, and (ii) powerful function classes such as deep neural networks may have leftover capacity to adapt to a new task even if they are not explicitly trained for adaptation.
Context-based approaches: Both forms of meta-learning above have been employed relatively successfully for image classification (Snell et al., 2017; Ravi & Larochelle, 2016; Finn et al., 2017). It has however been difficult to replicate that empirical performance in RL: sensitivity to hyperparameters (Henderson et al., 2018) precludes directly predicting the base-learner while long-range temporal dependencies make it difficult to learn the updates of the base learner (Nichol et al., 2018). Recent methods for meta-RL instead leverage context and learn a policy that depends not just on the current state xt but on the previous history. This may be done in a recurrent fashion (Heess et al., 2015; Hausknecht & Stone, 2015) or by learning a latent representation of the task (Rakelly et al., 2019). Context is a powerful construct: as Fig. 1 shows, even a simple vanilla RL algorithm (TD3) when combined with context performs comparably to state-of-the-art meta-RL algorithms. However, context is a meta-training technique; it does not suggest a way to adapt a policy to a new task. For instance, Rakelly et al. (2019) do not update parameters of the policy on a new task. They rely on the latent representation of the context variable generalizing to new tasks. This is difficult if the new task is different from the training tasks; we discuss this further in Sec. 3.1.1.
Policy-gradient-based algorithms versus off-policy methods: Policy-gradient-based methods have high sample complexity (Ilyas et al., 2018). This is particularly limiting for meta-RL (Finn et al., 2017; Rothfuss et al., 2018; Houthooft et al., 2018) where one (i) trains on a large number of tasks and (ii) aims to adapt to a new task with few data. Off-policy methods offer substantial gains in sample complexity. This motivates our use of off-policy updates for both meta-training and adaptation. Off-policy updates allow using past data from other policies; MQL exploits this substantially, taking up to 100× more updates using old data than new data during adaptation. Off-policy algorithms are typically very sensitive to hyper-parameters (Fujimoto et al., 2018a) but we show that MQL is robust to such sensitivity because it adapts automatically to the distribution shift using the Effective Sample Size (ESS).
Propensity score estimation has been extensively studied in both statistics (Robert & Casella, 2013; Quionero-Candela et al., 2009) and RL (Dudı́k et al., 2011; Jiang & Li, 2015; Kang et al., 2007; Bang & Robins, 2005). It is typically used to reweigh data from the proposal distribution to compute estimators on the target distribution. MQL uses propensity scores in a novel way: we fit a propensity score estimator on a subset of the meta-training replay buffer and use this model to sample transitions
from the replay buffer that are similar to the new task. The off-policy updates in MQL are essential to exploiting this data. Setting the coefficient of the proximal term in the adaptation-phase objective (18–19) using the effective sample size (ESS) is inspired by the recent work of Fakoor et al. (2019).
5 DISCUSSION
The algorithm proposed in this paper, namely MQL, builds upon three simple ideas. First, Q-learning with context is sufficient to be competitive on current meta-RL benchmarks. Second, maximizing the average reward of training tasks is an effective meta-learning technique. The meta-training phase of MQL is significantly simpler than that of existing algorithms and yet it achieves comparable performance to the state of the art. This suggests that we need to re-think meta-learning in the context of rich function approximators such as deep networks. Third, if one is to adapt to new tasks with few data, it is essential to exploit every available avenue. MQL recycles data from the meta-training replay buffer using propensity estimation techniques. This data is essentially free and is completely neglected by other algorithms. This idea can potentially be used in problems outside RL such as few-shot and zero-shot image classification.
Finally, this paper sheds light on the nature of benchmark environments in meta-RL. The fact that even vanilla Q-learning with a context variable—without meta-training and without any adaptation— is competitive with state of the art algorithms indicates that (i) training and validation tasks in the current meta-RL benchmarks are quite similar to each other and (ii) current benchmarks may be insufficient to evaluate meta-RL algorithms. Both of these are a call to action and point to the need to invest resources towards creating better benchmark problems for meta-RL that drive the innovation of new algorithms.
REFERENCES
Deepak Agarwal, Lihong Li, and Alexander Smola. Linear-time estimators for propensity scores. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 93–100, 2011.
Heejung Bang and James M Robins. Doubly robust estimation in missing data and causal inference models. Biometrics, 61(4):962–973, 2005.
Jonathan Baxter. Learning internal representations. Flinders University of S. Aust., 1995.
Jonathan Baxter. A model of inductive bias learning. Journal of artificial intelligence research, 12:149–198, 2000.
Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule. In Preprints Conf. Optimality in Artificial and Biological Neural Networks, pp. 6–8. Univ. of Texas, 1992.
Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule, 1997.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv:1606.01540, 2016.
Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. 2018.
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv:1406.1078, 2014.
Guneet S Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. A baseline for few-shot image classification. arXiv:1909.02729, 2019.
Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. Rl2: Fast reinforcement learning via slow reinforcement learning. arXiv:1611.02779, 2016.
Miroslav Dudík, John Langford, and Lihong Li. Doubly robust policy evaluation and learning. arXiv:1103.4601, 2011.
Víctor Elvira, Luca Martino, and Christian P Robert. Rethinking the effective sample size. arXiv:1809.04129, 2018.
Rasool Fakoor, Pratik Chaudhari, and Alexander J Smola. P3o: Policy-on policy-off policy optimization. arXiv:1905.01756, 2019.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126– 1135. JMLR. org, 2017.
Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. arXiv:1812.02900, 2018a.
Scott Fujimoto, Herke van Hoof, and Dave Meger. Addressing function approximation error in actor-critic methods. arXiv:1802.09477, 2018b.
Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4367–4375, 2018.
Matthew Hausknecht and Peter Stone. Deep recurrent q-learning for partially observable mdps. In 2015 AAAI Fall Symposium Series, 2015.
Nicolas Heess, Jonathan J Hunt, Timothy P Lillicrap, and David Silver. Memory-based control with recurrent neural networks. arXiv:1512.04455, 2015.
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87–94. Springer, 2001.
Rein Houthooft, Yuhua Chen, Phillip Isola, Bradly Stadie, Filip Wolski, OpenAI Jonathan Ho, and Pieter Abbeel. Evolved policy gradients. In Advances in Neural Information Processing Systems, pp. 5400–5409, 2018.
Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. Are deep policy gradient algorithms truly policy gradient algorithms? arXiv:1811.02553, 2018.
Nan Jiang and Lihong Li. Doubly robust off-policy value evaluation for reinforcement learning. arXiv:1511.03722, 2015.
Joseph DY Kang, Joseph L Schafer, et al. Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. Statistical science, 22(4):523–539, 2007.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
Augustine Kong. A note on importance sampling using standardized weights. University of Chicago, Dept. of Statistics, Tech. Rep, 348, 1992.
Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. arXiv:1904.03758, 2019.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015.
Tom M Mitchell. The need for biases in learning generalizations. Department of Computer Science, Laboratory for Computer Science Research, 1980.
Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv:1803.02999, 2018.
Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. Dataset shift in machine learning. The MIT Press, 2009.
Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, and Sergey Levine. Efficient off-policy metareinforcement learning via probabilistic context variables. arXiv:1903.08254, 2019.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. 2016.
Sashank J. Reddi, Barnabás Póczos, and Alexander J. Smola. Doubly robust covariate shift correction. In AAAI, 2015.
Sidney I Resnick. A probability path. Springer Science & Business Media, 2013.
Christian Robert and George Casella. Monte Carlo statistical methods. Springer Science & Business Media, 2013.
Jonas Rothfuss, Dennis Lee, Ignasi Clavera, Tamim Asfour, and Pieter Abbeel. Promp: Proximal meta-policy search. arXiv:1810.06784, 2018.
Jürgen Schmidhuber. Evolutionary principles in self-referential learning (on learning how to learn: the meta-meta-... hook). Diploma thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.
Jürgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. Machine Learning, 28(1):105–130, Jul 1997. ISSN 1573-0565.
John Schulman, Sergey Levine, Pieter Abbeel, Michael I Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, volume 37, pp. 1889–1897, 2015.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning, 2014.
Adrian Smith. Sequential Monte Carlo methods in practice. Springer Science & Business Media, 2013.
Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077–4087, 2017.
Sebastian Thrun. Is learning the n-th thing any easier than learning the first? In Advances in neural information processing systems, pp. 640–646, 1996.
Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE, 2012.
Paul E Utgoff. Shift of bias for inductive concept learning. Machine learning: An artificial intelligence approach, 2:107–148, 1986.
Jane X. Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Rémi Munos, Charles Blundell, Dharshan Kumaran, and Matthew Botvinick. Learning to reinforcement learn. CoRR, abs/1611.05763, 2016. URL http://arxiv.org/abs/1611.05763.
A PSEUDO-CODE
The pseudo-code for MQL during training and adaptation is given in Algorithm 1 and Algorithm 2. After MQL is trained for a given environment as described in Algorithm 1, it returns the meta-trained policy θ and a replay buffer containing data from the training tasks.
Next, Algorithm 2 runs the adaptation procedure which adapts the meta-trained policy to a test task D with few data. To do so, MQL optimizes the adaptation objective in two steps. After gathering data from a test task D, MQL first updates the policy using the new data (line 4). MQL then fits a logistic classifier on a mini-batch of transitions from the meta-training replay buffer and the transitions collected from the test task, and estimates ÊSS (lines 5–6). Finally, the adaptation step runs for n iterations (lines 7–10) in which MQL exploits past data, using the propensity score to decide whether or not a given sample is related to the current test task.
Algorithm 1: MQL - Meta-training
Input: Set of training tasks Dmeta
1  Initialize the replay buffer
2  Initialize parameters θ of an off-policy method, e.g., TD3
3  while not done do
4      // Rollout and update policy
5      Sample a task D ∼ Dmeta
6      Gather data from task D using policy πθ while feeding transitions through the context GRU. Add the trajectory to the replay buffer.
7      b ← Sample mini-batch from buffer
8      Update parameters θ using mini-batch b and Eqn. (15)
9  θmeta ← θ
10 return θmeta, replay buffer
Algorithm 2: MQL - Adaptation
Input: Test task D, meta-training replay buffer, meta-trained policy θmeta
1  Initialize temporary buffer buf
2  θ ← θmeta
3  buf ← Gather data from D using πθmeta
4  Update θ using buf and Eqn. (18)
5  Fit β(D) using buf and the meta-training replay buffer using Eqn. (12)
6  Estimate ÊSS using β(D) and Eqn. (13)
7  for i ≤ n do
8      b ← sample mini-batch from the meta-training replay buffer
9      Calculate β for b
10     Update θ using Eqn. (19)
11 Evaluate θ on a new rollout from task D
12 return θ
B OUT-OF-DISTRIBUTION TASKS
MQL is designed for explicitly using data from the new task along with off-policy data from old, possibly very different tasks. This is on account of two things: (i) the loss function of MQL does not use the old data if it is very different from the new task (β is close to zero for all such samples), and (ii) the first term in (18) makes multiple updates using data from the new task. To explore this aspect, we create an out-of-distribution task using the “Half-Cheetah-Vel” environment wherein we use disjoint sets of velocities for meta-training and testing. The setup is as follows:
• Half-Cheetah-Vel-OOD-Medium: the target velocity for a training task is sampled uniformly at random from [0, 2.5] while that for a test task is sampled uniformly at random from [2.5, 3.0]. We call this a “medium” hardness task because although the distributions of train and test velocities are disjoint, they are close to each other.
• Half-Cheetah-Vel-OOD-Hard: the target velocity for a training task is sampled uniformly at random from [0, 1.5] while that for a test task is sampled uniformly at random from [2.5, 3.0]. This is a “hard” task because the distributions of train and test velocities are far away from each other.
Fig. 6a shows that MQL significantly outperforms PEARL when the train and test target velocities come from disjoint sets. We used the published code of PEARL (Rakelly et al., 2019) for this experiment. This shows that the adaptation in MQL is crucial to generalizing to new situations which are not a part of the meta-training process. Fig. 6b shows the evolution of the proximal penalty coefficient λ and the propensity score β(z) during meta-training for the medium-hard task. We see that λ ≈ 0.8 while β(z) ≈ 0.2 throughout training. This indicates that MQL automatically adjusts its test-time adaptation to use only few samples in (19) if the test task provides transitions quite different than those in the replay buffer.
We next discuss results on the harder task Half-Cheetah-Vel-OOD-Hard. There is a very large gap between training and test target velocities in this case. Fig. 7a shows the comparison with the same test protocol as the other experiments in this paper. In particular, we collect 200 time-steps from the new task and use it for adaptation in both MQL and TD3-context. Since this task is particularly hard, we also ran an experiment where 1200 time-steps (6 episodes) are given to the two algorithms for adaptation. The results are shown in Fig. 7b. In both cases, we see that MQL is better than TD3-context by a large margin (the standard deviation on these plots is high because the environment is hard). Note that since we re-initialize the hidden state of the context network at the beginning of each episode, TD3-context cannot take advantage of the extra time-steps. MQL on the other hand updates the policy explicitly and can take advantage of this extra data.
To be thorough, we also collected 800 time-steps from the new task from the same episode; the results are shown in Fig. 8a. We again notice that MQL results in slightly higher rewards than TD3-context, in spite of the fact that both algorithms suffer a large degradation in performance as compared to Figs. 7a and 7b.
Figs. 7c, 7d and 8b show that the proximal penalty coefficient λ ≈ 1 and the propensity score β(z) ≈ 0 for a large fraction of training. This shows that MQL is able to automatically discard samples unrelated to the new task during the adaptation phase.
C MORE ABLATION STUDIES
We conduct a series of additional ablation studies to analyze the different components of the MQL algorithm. We use two environments for this purpose, namely Half-Cheetah-Vel and Walker-2D-Params. Fig. 9 and Fig. 10 show the results of these experiments. They show that the adaptation phase is more useful for Half-Cheetah-Vel than for Walker-2D-Params: the test and training tasks are very similar in Walker-2D-Params, which helps TD3-context achieve strong performance and leaves little room for improvement through adaptation.
D HYPER-PARAMETERS AND MORE DETAILS OF THE EMPIRICAL RESULTS

REVIEW
Summary
-------------
The authors propose meta Q-learning, an algorithm for off-policy meta RL. The idea is to meta-train a context-dependent policy to maximize the expected return averaged over all training tasks, and then adapt this policy to any new task by leveraging both novel and past experience using importance sampling corrections. The proposed approach is evaluated on standard Mujoco benchmarks and compared to other relevant meta-rl algorithms.
Comments
--------------
Meta-RL is a relevant direction for mitigating the sample complexity of RL agents and allowing them to scale to larger domains. This work proposes interesting ideas and overall it constitutes a nice contribution. In particular, I found it interesting (and at the same time worrying) that a simple Q-learning algorithm with hidden contexts compares favorably to state-of-the-art meta-RL approaches in standard benchmarks. The paper is well-organized and easy to read. Some comments/questions follow.
1. (15) is probably mis-written (theta is trained to maximize the TD error)
2. In the adaptation phase, are (18) and (19) performed one after the other? Could they be done at the same time by setting the importance weights of the new trajectories to 1 and sampling from the whole experience (new and old)?
3. Note that the ESS estimator (13) diverges to infinity when all weights are close to zero. Is it clipped to [0,1] in the experiments? See e.g. [1] or [2] for more robust estimators that are bounded.
4. Since many recent works try to improve the generalization capabilities of meta-rl algorithms, I was wondering how the proposed approach generalizes to out-of-distribution tasks (i.e., tasks that are unlikely to occur at meta-training). Though it is never mentioned in the paper, I believe the proposed method has the potential to be robust to negative transfer since the importance weights (which would be very small for very different tasks) should automatically discard old data and focus on new data alone. This is in contrast to many existing methods where the meta-trained model might negatively bias the learning of very different tasks. I think an experiment of this kind would be valuable to improve the paper.
[1] Elvira, V., Martino, L., & Robert, C. P. (2018). Rethinking the effective sample size. arXiv preprint arXiv:1809.04129.
[2] Tirinzoni, A., Salvini, M., & Restelli, M. (2019, May). Transfer of Samples in Policy Search via Multiple Importance Sampling. In International Conference on Machine Learning (pp. 6264-6274). |
2 . 2 L O G I S T I C R E G R E S S I O N F O R E S T I M AT I N G T H E P R O P E N S I T Y S C O R E
Consider standard supervised learning: given two distributions q(x) (say, train) and p(x) (say, test), we would like to estimate how a model’s predictions ŷ(x) change across them. This is formally done using importance sampling:
E x∼p(x) E y|x
[ `(y, ŷ(x)) ] = E x∼q(x) E y|x [ β(x) `(y, ŷ(x)) ] ; (10)
where y|x are the true labels of data, the predictions of the model are ŷ(x) and `(y, ŷ(x)) is the loss for each datum (x, y). The importance ratio β(x) = dpdq (x), also known as the propensity score, is the Radon-Nikodym derivative (Resnick, 2013) of the two data densities and measures the odds of a sample x coming from the distribution p versus the distribution q. In practice, we do not know the densities q(x) and p(x) and therefore need to estimate β(x) using some finite data Xq = {x1, . . . , xm} drawn from q and Xp = {x′1, . . . , x′m} drawn from p. As Agarwal et al. (2011) show, this is easy to do using logistic regression. Set zk = 1 to be the labels for the data in Xq and zk = −1 to be the labels of the data in Xp for k ≤ m and fit a logistic classifier on the combined 2m
samples by solving
w∗ = min w
1
2m ∑ (x,z) log ( 1 + e−zw >x ) + c ‖w‖2. (11)
This gives
β(x) = P(z = −1|x) P(z = 1|x) = e−w ∗>x. (12)
Normalized Effective Sample Size (ÊSS): A related quantity to β(x) is the normalized Effective Sample Size (ÊSS) which we define as the relative number of samples from the target distribution p(x) required to obtain an estimator with performance (say, variance) equal to that of the importance sampling estimator (10). It is not possible to compute the ÊSS without knowing both densities q(x) and p(x) but there are many heuristics for estimating it. A popular one in the Monte Carlo literature (Kong, 1992; Smith, 2013; Elvira et al., 2018) is
ÊSS = 1
m
( ∑m k=1 β(xk))
2∑m k=1 β(xk) 2 ∈ [0, 1] (13)
where X = {x1, . . . , xm} is some finite batch of data. Observe that if two distributions q and p are close then the ÊSS is close to one; if they are far apart the ÊSS is close to zero.
3 M Q L
This section describes the MQL algorithm. We begin by describing the meta-training procedure of MQL including a discussion of multi-task training in Sec. 3.1. The adaptation procedure is described in Sec. 3.2.
3 . 1 M E TA - T R A I N I N G
MQL performs meta-training using the multi-task objective. Note that if one sets
`kmeta(θ) , ` k(θ) = E
x∼pk0
[ qk(x, uθ(x)) ] (14)
in (8) then the parameters θ̂meta are such that they maximize the average returns over all tasks from the meta-training set. We use an off-policy algorithm named TD3 (Fujimoto et al., 2018b) as the building block and solve for
θ̂meta = arg min θ
1
n n∑ k=1 E τ∼Dk [ TD2(θ) ] ; (15)
where TD(·) is defined in (7). As is standard in TD3, we use two action-value functions parameterized by ϕ1 and ϕ2 and take their minimum to compute the target in (7). This trick known as “doubleQ-learning” reduces the over-estimation bias. Let us emphasize that (14) is a special case of the procedure outlined in (8). The following remark explains why MQL uses the multi-task objective as opposed to the meta-training objective used, for instance, in existing gradient-based meta-RL algorithms.
Remark 1. Let us compare the critical points of the m-step MAML objective (9) to those of the multi-task objective which uses (14). As is done by the authors in Nichol et al. (2018), we can perform a Taylor series expansion around the parameters θ to obtain
∇`kmeta(θ) = ∇`k(θ) + 2α(m− 1) ( ∇2`k(θ) ) ∇`k(θ) +O(α2). (16)
Further, note that∇`kmeta in (16) is also the gradient of the loss
`k(θ) + α(m− 1)‖∇`k(θ)‖22 (17)
up to first order. This lends a new interpretation that MAML is attracted towards regions in the loss landscape that under-fit on individual tasks: parameters with large ‖∇`k‖2 will be far from the local maxima of `k(θ). The parameters α and m control this under-fitting. Larger the number of gradient steps, larger the under-fitting effect. This remark suggests that the adaptation speed of gradient-based meta-learning comes at the cost of under-fitting on the tasks.
3 . 1 . 1 D E S I G N I N G C O N T E X T
As discussed in Sec. 1 and 4.4, the identity of the task in meta-RL can be thought of as the hidden variable of an underlying partially-observable MDP. The optimal policy on the entire trajectory of the states, actions and the rewards. We therefore design a recurrent context variable zt that depends on {(xi, ui, ri)}i≤t. We set zt to the hidden state at time t of a Gated Recurrent Unit (GRU by Cho et al. (2014)) model. All the policies uθ(x) and value functions qϕ(x, u) in MQL are conditioned on the context and implemented as uθ(x, z) and qϕ(x, u, z). Any other recurrent model can be used to design the context; we used a GRU because it offers a good trade-off between a rich representation and computational complexity.
Remark 2 (MQL uses a deterministic context that is not permutation invariant). We have aimed for simplicity while designing the context. The context in MQL is built using an off-the-shelf model like GRU and is not permutation invariant. Indeed, the direction of time affords crucial information about the dynamics of a task to the agent, e.g., a Half-Cheetah running forward versus backward has arguably the same state trajectory but in a different order. Further, the context in MQL is a deterministic function of the trajectory. Both these aspects are different than the context used by Rakelly et al. (2019) who design an inference network and sample a probabilistic context conditioned on a moving window. RL algorithms are quite complex and challenging to reproduce. Current meta-RL techniques which build upon them further exacerbate this complexity. Our demonstration that a simple context variable is enough is an important contribution.
3 . 2 A D A P TAT I O N T O A N E W TA S K
We next discuss the adaptation procedure which adapts the meta-trained policy θ̂meta to a new task Dnew with few data. MQL optimizes the adaptation objective introduced in (2) into two steps.
1. Vanilla off-policy adaptation: The first step is to update the policy using the new data as
arg max θ
{ E
τ∼Dnew
[ `new(θ) ] − λ
2 ‖θ − θ̂meta‖22
} . (18)
The quadratic penalty ‖θ − θ̂meta‖2 keeps the parameters close to θ̂meta. This is crucial to reducing the variance of the model that is adapted using few data from the new task (Reddi et al., 2015). Off-policy learning is critical in this step because of its sample efficiency. We initialize θ to θ̂meta while solving (18).
2. Importance-ratio corrected off-policy updates: The second step of MQL exploits the metatraining replay buffer. Meta-training tasksDmeta are disjoint fromDnew but because they are expected to come from the same task distribution, transitions collected during meta-training can potentially be exploited to adapt the policy. This is difficult to do on two counts. First, the meta-training transitions do not come from Dnew. Second, even for transitions from the same task, it is non-trivial to update the policy because of extrapolation error (Fujimoto et al., 2018a): the value function has high error on states it has not seen before. Our use of the propensity score to reweigh transitions is a simpler version of the conditional generative model used by Fujimoto et al. (2018a) in this context.
MQL fits a logistic classifier on a mini-batch of transitions from the meta-training replay buffer and the transitions collected from the new task in step 1. The context variable zt is the feature for this classifier. The logistic classifier estimates the importance ratio β(τ ;Dnew,Dmeta) and can be used to reweigh data from the meta-training replay buffer for taking updates as
arg max θ
{ E
τ∼Dmeta
[ β(τ ;Dnew,Dmeta) `new(θ) ] − λ
2 ‖θ − θ̂meta‖22
} . (19)
We have again included a quadratic penalty ‖θ− θ̂meta‖2 that keeps the new parameters close to θ̂meta. Estimating the importance ratio involves solving a convex optimization problem on few samples (typically, 200 from the new task and 200-400 from the meta-training tasks). This classifier allows MQL to exploit the large amount of past data. In practice, we perform as many as 100× more weight updates using (19) than (18).
Remark 3 (Picking the coefficient λ). Following Fakoor et al. (2019), we pick
λ = 1− ÊSS
for both the steps (18–19). This relaxes the quadratic penalty if the new task is similar to the metatraining tasks (ÊSS is large) and vice-versa. While λ could be tuned as a hyper-parameter, our empirical results show that adapting it using ÊSS is a simple and effective heuristic.
Remark 4 (Details of estimating the importance ratio). It is crucial to ensure that the logistic classifier for estimating β generalizes well if we are to reweigh transitions in the meta-training replay buffer that are different than the ones the logistic was fitted upon. We do so in two ways: (i) the regularization co-efficient in (11) is chosen to be relatively large, that way we prefer false negatives than risk false positives; (ii) transitions with very high β are valuable for updating (19) but cause a large variance in stochastic gradient descent-based updates, we clip β before taking the update in (19). The clipping constant is a hyper-parameter and is given in Sec. 4.
MQL requires having access to the meta-training replay buffer during adaptation. This is not a debilitating requirement and there are a number of clustering techniques that can pick important transitions from the replay-buffer if a robotic agent is limited by available hard-disk space. The meta-training replay buffer is at most 3 GB for the experiments in Sec. 4.
4 E X P E R I M E N T S
This section presents the experimental results of MQL. We first discuss the setup and provide details the benchmark in Sec. 4.1. This is followed by empirical results and ablation experiments in Sec. 4.2.
4 . 1 S E T U P
Tasks and algorithms: We use the MuJoCo (Todorov et al., 2012) simulator with OpenAI Gym (Brockman et al., 2016) on continuous-control meta-RL benchmark tasks. These tasks have different rewards, randomized system parameters (Walker-2D-Params) and have been used in previous papers such as Finn et al. (2017); Rothfuss et al. (2018); Rakelly et al. (2019). We compare against standard baseline algorithms, namely MAML (TRPO (Schulman et al., 2015) variant) (Finn et al., 2017), RL2 (Duan et al., 2016), ProMP (Rothfuss et al., 2018) and PEARL (Rakelly et al., 2019). We obtained the training curves and hyper-parameters for all the three algorithms from the published code by Rakelly et al. (2019).
We will compare the above algorithms against: (i) vanilla TD3 (Fujimoto et al., 2018a) without any adaptation on new tasks, (ii) TD3-context: TD3 with GRU-based context Sec. 3.1.1 without any adaptation, and (iii) MQL: TD3 with context and adaptation on new task using the procedure in Sec. 3.2. All the three variants use
the multi-task objective for meta-training (15). We use Adam (Kingma & Ba, 2014) for optimizing all the loss functions in this paper.
Evaluation: Current meta-RL benchmarks lack a systematic evaluation procedure. 2 For each environment, Rakelly et al. (2019) constructed a fixed set of meta-training tasks (Dmeta) and a validation set of tasks Dnew that are disjoint from the meta-training set. To enable direct comparison with published empirical results, we closely followed the evaluation code of Rakelly et al. (2019) to create these tasks. We also use the exact same evaluation protocol as that of these authors, e.g., 200 timesteps of data from the new task, or the number of evaluation episodes. We report the undiscounted return on the validation tasks with statistics computed across 5 random seeds.
4 . 2 R E S U LT S
Our first result, in Fig. 2, is to show that vanilla off-policy learning with context, without any adaptation is competitive with state of the art meta-RL algorithms. We used a standard implementation of TD3 and train on the meta-training tasks using the multi-task objective (15). Hyper-parameters for these tasks are provided in Appendix D. This result is surprising and had gone unnoticed in the current literature. Policies that have access to the context can easily generalize to the validation tasks and achieve performance that is comparable to more sophisticated meta-RL algorithms.
We next evaluate MQL against existing meta-RL benchmarks on all environments. The results are shown in Fig. 3. We see that for all environments except Walker-2D-Params and Ant-Goal-2D, MQL obtains comparable or better returns on the validation tasks. In most cases, in particular for the challenging Humanoid-Direc-2D environment, MQL converges faster than existing algorithms. MAML and ProMP require about 100M time-steps to converge to returns that are significantly worse
2For instance, training and validation tasks are not explicitly disjoint in Finn et al. (2017); Rothfuss et al. (2018) and these algorithms may benefit during adaptation from having seen the same task before. The OpenAI Gym environments used in Finn et al. (2017); Rothfuss et al. (2018); Rakelly et al. (2019) provide different rewards for the same task. The evaluation protocol in existing papers, e.g., length of episode for a new task, amount of data available for adaptation from the new task, is not consistent. This makes reproducing experiments and comparing numerical results extremely difficult.
than the returns of off-policy algorithms like MQL and PEARL. Compare the training curve for TD3-context for the Ant-Goal-2D environment in Fig. 2 with that of the same environment in Fig. 3: the former shows a prominent dip in performance as meta-training progresses; this dip is absent in Fig. 3 and can be attributed to the adaptation phase of MQL.
4.3 ABLATION EXPERIMENTS
We conduct a series of ablation studies to analyze the different components of the MQL algorithm. We use two environments for this purpose, namely Half-Cheetah-Fwd-Back and Ant-Fwd-Back. Fig. 4a shows that the adaptation in MQL in (18) and (19) improves performance. Also observe that MQL has a smaller standard deviation in the returns as compared to TD3-context, which does not perform any adaptation; this can be seen as the adaptation phase making up for the lost performance of the meta-trained policy on a difficult task. Next, we evaluate the importance of the additional data from the replay buffer in MQL. Fig. 4b compares the performance of MQL with and without the updates in (19). We see that the old data, even if it comes from different tasks, is useful to improve the performance on top of (18). Fig. 4c shows the effectiveness of setting λ = 1− ÊSS as compared to a fixed value of λ = 0.5. We see that modulating the quadratic penalty with ÊSS helps, although the effect is minor for these tasks. The ideal value of λ depends on a given task, and using 1 − ÊSS can help to adjust to different tasks without the need to do a hyper-parameter search per task. Finally, Fig. 5 shows the evolution of λ and β(z) during meta-training. The coefficient λ is about 0.55 and β(z) is 0.8 for a large fraction of the time. The latter indicates that propensity score estimation is successful in sampling transitions from the meta-training replay buffer that are similar to the validation tasks. The value of λ remains relatively unchanged during training. This value indicates the fraction of transitions in the old data that are similar to those from the new tasks; since there are two distinct tasks in Ant-Fwd-Back, the value λ = 0.55 is appropriate.
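To make the adaptive coefficient concrete, the following is a minimal sketch of λ = 1 − ÊSS, assuming the normalized effective sample size of Kong (1992); the β values shown are illustrative and would come from the propensity model above.

```python
import numpy as np

def normalized_ess(beta):
    """ESS_hat in [0, 1]: close to 1 when old transitions resemble the new
    task (beta roughly uniform), close to 0 otherwise."""
    beta = np.asarray(beta, dtype=float)
    return (beta.sum() ** 2) / (len(beta) * (beta ** 2).sum())

beta = np.array([0.8, 0.9, 0.2, 1.1])   # illustrative propensity scores
lam = 1.0 - normalized_ess(beta)        # modulates the quadratic penalty
```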
4.4 RELATED WORK
Learning to learn: The idea of building an inductive bias for learning a new task by training on a large number of related tasks was established in a series of works (Utgoff, 1986; Schmidhuber, 1987; Baxter, 1995; Thrun, 1996; Thrun & Pratt, 2012). These papers propose building a base learner that fits on each task and a meta-learner that learns properties of the base learners to output a new base
learner for a new task. The recent literature instantiates this idea in two forms: (i) the meta-learner directly predicts the base-learner (Wang et al., 2016; Snell et al., 2017) and (ii) the meta-learner learns the updates of the base-learner (Bengio et al., 1992; Hochreiter et al., 2001; Finn et al., 2017).
Meta-training versus multi-task training: Meta-training aims to train a policy that can be adapted efficiently on a new task. Conceptually, the improved efficiency of a meta-learner comes from two things: (i) building a better inductive bias to initialize the learning (Schmidhuber et al., 1997; Baxter, 1995; 2000; Mitchell, 1980), or (ii) learning a better learning procedure (Bengio et al., 1997; Lee et al., 2019). The two notions of meta-learning above are complementary to each other and in fact, most recent literature using deep neural networks, e.g., MAML (Finn et al., 2017) and Prototypical Networks (Snell et al., 2017), conforms to the first notion of building a better inductive bias.
The multi-task training objective in MQL is the simplest possible instantiation of this idea: it maximizes the average reward on all tasks and learns a better prior without explicitly training for improving adaptation. This aspect of MQL coincides with a recent trend in meta-learning for image classification where it has been observed that modifications to episodic meta-training (Snell et al., 2017; Gidaris & Komodakis, 2018; Chen et al., 2018), or even foregoing meta-training completely (Dhillon et al., 2019) performs better. We speculate two reasons for this phenomenon: (i) meta-training methods are complex to implement and tune, and (ii) powerful function classes such as deep neural networks may have leftover capacity to adapt to a new task even if they are not explicitly trained for adaptation.
Context-based approaches: Both forms of meta-learning above have been employed relatively successfully for image classification (Snell et al., 2017; Ravi & Larochelle, 2016; Finn et al., 2017). It has however been difficult to replicate that empirical performance in RL: sensitivity to hyper-parameters (Henderson et al., 2018) precludes directly predicting the base-learner, while long-range temporal dependencies make it difficult to learn the updates of the base learner (Nichol et al., 2018). Recent methods for meta-RL instead leverage context and learn a policy that depends not just on the current state xt but also on the previous history. This may be done in a recurrent fashion (Heess et al., 2015; Hausknecht & Stone, 2015) or by learning a latent representation of the task (Rakelly et al., 2019). Context is a powerful construct: as Fig. 1 shows, even a simple vanilla RL algorithm (TD3) when combined with context performs comparably to state-of-the-art meta-RL algorithms. However, context is a meta-training technique; it does not suggest a way to adapt a policy to a new task. For instance, Rakelly et al. (2019) do not update parameters of the policy on a new task. They rely on the latent representation of the context variable generalizing to new tasks. This is difficult if the new task is different from the training tasks; we discuss this further in Sec. 3.1.1.
Policy-gradient-based algorithms versus off-policy methods: Policy-gradient-based methods have high sample complexity (Ilyas et al., 2018). This is particularly limiting for meta-RL (Finn et al., 2017; Rothfuss et al., 2018; Houthooft et al., 2018) where one (i) trains on a large number of tasks and, (ii) aims to adapt to a new task with few data. Off-policy methods offer substantial gains in sample complexity. This motivates our use of off-policy updates for both meta-training and adaptation. Off-policy updates allow using past data from other policies. MQL exploits this substantially: it takes up to 100× more updates using old data than new data during adaptation. Off-policy algorithms are typically very sensitive to hyper-parameters (Fujimoto et al., 2018a), but we show that MQL is robust to such sensitivity because it adapts automatically to the distribution shift using the Effective Sample Size (ESS).
Propensity score estimation has been extensively studied in both statistics (Robert & Casella, 2013; Quionero-Candela et al., 2009) and RL (Dudík et al., 2011; Jiang & Li, 2015; Kang et al., 2007; Bang & Robins, 2005). It is typically used to reweigh data from the proposal distribution to compute estimators on the target distribution. MQL uses propensity scores in a novel way: we fit a propensity score estimator on a subset of the meta-training replay buffer and use this model to sample transitions
from the replay buffer that are similar to the new task. The off-policy updates in MQL are essential to exploiting this data. The coefficient of the proximal term in the adaptation-phase objective (18–19) using the effective sample size (ESS) is inspired from the recent work of Fakoor et al. (2019).
5 DISCUSSION
The algorithm proposed in this paper, namely MQL, builds upon three simple ideas. First, Q-learning with context is sufficient to be competitive on current meta-RL benchmarks. Second, maximizing the average reward of training tasks is an effective meta-learning technique. The meta-training phase of MQL is significantly simpler than that of existing algorithms and yet it achieves performance comparable to the state of the art. This suggests that we need to re-think meta-learning in the context of rich function approximators such as deep networks. Third, if one is to adapt to new tasks with few data, it is essential to exploit every available avenue. MQL recycles data from the meta-training replay buffer using propensity estimation techniques. This data is essentially free and is completely neglected by other algorithms. This idea can potentially be used in problems outside RL, such as few-shot and zero-shot image classification.
Finally, this paper sheds light on the nature of benchmark environments in meta-RL. The fact that even vanilla Q-learning with a context variable—without meta-training and without any adaptation— is competitive with state of the art algorithms indicates that (i) training and validation tasks in the current meta-RL benchmarks are quite similar to each other and (ii) current benchmarks may be insufficient to evaluate meta-RL algorithms. Both of these are a call to action and point to the need to invest resources towards creating better benchmark problems for meta-RL that drive the innovation of new algorithms.
REFERENCES
Deepak Agarwal, Lihong Li, and Alexander Smola. Linear-time estimators for propensity scores. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 93–100, 2011.
Heejung Bang and James M Robins. Doubly robust estimation in missing data and causal inference models. Biometrics, 61(4):962–973, 2005.
Jonathan Baxter. Learning internal representations. Flinders University of S. Aust., 1995.
Jonathan Baxter. A model of inductive bias learning. Journal of artificial intelligence research, 12:149–198, 2000.
Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule. In Preprints Conf. Optimality in Artificial and Biological Neural Networks, pp. 6–8. Univ. of Texas, 1992.
Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule, 1997.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv:1606.01540, 2016.
Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. 2018.
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv:1406.1078, 2014.
Guneet S Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. A baseline for few-shot image classification. arXiv:1909.02729, 2019.
Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. Rl2: Fast reinforcement learning via slow reinforcement learning. arXiv:1611.02779, 2016.
Miroslav Dudík, John Langford, and Lihong Li. Doubly robust policy evaluation and learning. arXiv:1103.4601, 2011.
Víctor Elvira, Luca Martino, and Christian P Robert. Rethinking the effective sample size. arXiv:1809.04129, 2018.
Rasool Fakoor, Pratik Chaudhari, and Alexander J Smola. P3o: Policy-on policy-off policy optimization. arXiv:1905.01756, 2019.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126– 1135. JMLR. org, 2017.
Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. arXiv:1812.02900, 2018a.
Scott Fujimoto, Herke van Hoof, and Dave Meger. Addressing function approximation error in actor-critic methods. arXiv:1802.09477, 2018b.
Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4367–4375, 2018.
Matthew Hausknecht and Peter Stone. Deep recurrent q-learning for partially observable mdps. In 2015 AAAI Fall Symposium Series, 2015.
Nicolas Heess, Jonathan J Hunt, Timothy P Lillicrap, and David Silver. Memory-based control with recurrent neural networks. arXiv:1512.04455, 2015.
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87–94. Springer, 2001.
Rein Houthooft, Yuhua Chen, Phillip Isola, Bradly Stadie, Filip Wolski, OpenAI Jonathan Ho, and Pieter Abbeel. Evolved policy gradients. In Advances in Neural Information Processing Systems, pp. 5400–5409, 2018.
Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. Are deep policy gradient algorithms truly policy gradient algorithms? arXiv:1811.02553, 2018.
Nan Jiang and Lihong Li. Doubly robust off-policy value evaluation for reinforcement learning. arXiv:1511.03722, 2015.
Joseph DY Kang, Joseph L Schafer, et al. Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. Statistical science, 22(4):523–539, 2007.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
Augustine Kong. A note on importance sampling using standardized weights. University of Chicago, Dept. of Statistics, Tech. Rep, 348, 1992.
Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. arXiv:1904.03758, 2019.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015.
Tom M Mitchell. The need for biases in learning generalizations. Department of Computer Science, Laboratory for Computer Science Research . . . , 1980.
Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv:1803.02999, 2018.
Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. Dataset shift in machine learning. The MIT Press, 2009.
Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, and Sergey Levine. Efficient off-policy metareinforcement learning via probabilistic context variables. arXiv:1903.08254, 2019.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. 2016.
Sashank J. Reddi, Barnabás Póczos, and Alexander J. Smola. Doubly robust covariate shift correction. In AAAI, 2015.
Sidney I Resnick. A probability path. Springer Science & Business Media, 2013.
Christian Robert and George Casella. Monte Carlo statistical methods. Springer Science & Business Media, 2013.
Jonas Rothfuss, Dennis Lee, Ignasi Clavera, Tamim Asfour, and Pieter Abbeel. Promp: Proximal meta-policy search. arXiv:1810.06784, 2018.
Jürgen Schmidhuber. Evolutionary principles in self-referential learning. (On learning how to learn: The meta-meta-... hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.
Jürgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. Machine Learning, 28(1):105–130, Jul 1997. ISSN 1573-0565.
John Schulman, Sergey Levine, Pieter Abbeel, Michael I Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, volume 37, pp. 1889–1897, 2015.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning, 2014.
Adrian Smith. Sequential Monte Carlo methods in practice. Springer Science & Business Media, 2013.
Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077–4087, 2017.
Sebastian Thrun. Is learning the n-th thing any easier than learning the first? In Advances in neural information processing systems, pp. 640–646, 1996.
Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE, 2012.
Paul E Utgoff. Shift of bias for inductive concept learning. Machine learning: An artificial intelligence approach, 2:107–148, 1986.
Jane X. Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Rémi Munos, Charles Blundell, Dharshan Kumaran, and Matthew Botvinick. Learning to reinforcement learn. CoRR, abs/1611.05763, 2016. URL http://arxiv.org/abs/1611.05763.
A PSEUDO-CODE
The pseudo-code for MQL during training and adaptation is given in Algorithm 1 and Algorithm 2. After MQL is trained for a given environment as described in Algorithm 1, it returns the meta-trained policy θ and a replay buffer containing transitions from the training tasks.
Next, Algorithm 2 runs the adaptation procedure, which adapts the meta-trained policy to a test task D with few data. To do so, MQL optimizes the adaptation objective in two steps. After gathering data from a test task D, MQL first updates the policy using the new data (line 4). MQL then fits a logistic classifier on a mini-batch of transitions from the meta-training replay buffer and the transitions collected from the test task, and estimates ÊSS (lines 5-6). Finally, the adaptation step runs for n iterations (lines 7-10), in which MQL exploits past data, using the propensity score to decide whether or not a given sample is related to the current test task.
Algorithm 1: MQL - Meta-training
Input: Set of training tasks Dmeta
1  Initialize the replay buffer
2  Initialize parameters θ of an off-policy method, e.g., TD3
3  while not done do
4      // Rollout and update policy
5      Sample a task D ∼ Dmeta
6      Gather data from task D using policy πθ while feeding transitions through the context GRU; add the trajectory to the replay buffer
7      b ← sample mini-batch from the replay buffer
8      Update parameters θ using mini-batch b and Eqn. (15)
9  θmeta ← θ
10 return θmeta, replay buffer
Algorithm 2: MQL - Adaptation
Input: Test task D, meta-training replay buffer, meta-trained policy θmeta
1  Initialize temporary buffer buf
2  θ ← θmeta
3  buf ← gather data from D using πθmeta
4  Update θ with Eqn. (18) using buf
5  Fit β(D) on buf and the meta-training replay buffer using Eqn. (12)
6  Estimate ÊSS from β(D) using Eqn. (13)
7  for i ≤ n do
8      b ← sample mini-batch from the meta-training replay buffer
9      Calculate β for b
10     Update θ using Eqn. (19)
11 Evaluate θ on a new rollout from task D
12 return θ
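As a minimal sketch of the reweighted proximal update on lines 7-10 (Eqn. (19)); `td_error` is an assumed per-sample TD-loss callable standing in for the TD3 critic/actor losses, and all names here are illustrative:

```python
import torch

def mql_step(params, params_meta, batch, beta, lam, td_error, opt):
    """One adaptation step on old data: importance-weighted TD loss plus a
    proximal penalty keeping theta close to the meta-trained solution."""
    loss = (beta * td_error(params, batch)).mean()      # beta-reweighted objective
    prox = sum(((p - pm) ** 2).sum()
               for p, pm in zip(params, params_meta))   # ||theta - theta_meta||^2
    total = loss + lam * prox                           # lam = 1 - ESS_hat
    opt.zero_grad(); total.backward(); opt.step()
```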
B OUT-OF-DISTRIBUTION TASKS
MQL is designed for explicitly using data from the new task along with off-policy data from old, possibly very different tasks. This is on account of two things: (i) the loss function of MQL does not use the old data if it is very different from the new task (β is close to zero for all samples), and (ii) the first term in (18) makes multiple updates using data from the new task. To explore this aspect, we create an out-of-distribution task using the “Half-Cheetah-Vel” environment wherein we use disjoint sets of velocities for meta-training and testing. The setup is as follows:
• Half-Cheetah-Vel-OOD-Medium: target velocity for a training task is sampled uniformly randomly from [0, 2.5] while that for test task is sampled uniformly randomly from [2.5, 3.0].
This is what we call a “medium” hardness task because although the distributions of train and test velocities are disjoint, they are close to each other.
• Half-Cheetah-Vel-OOD-Hard: target velocity for a training task is sampled uniformly randomly from [0, 1.5] while that for test task is sampled uniformly randomly from [2.5, 3.0]. This is a “hard” task because the distributions of train and test velocities are far away from each other.
Fig. 6a shows that MQL significantly outperforms PEARL when the train and test target velocities come from disjoint sets. We used the published code of PEARL (Rakelly et al., 2019) for this experiment. This shows that the adaptation in MQL is crucial to generalizing to new situations which are not a part of the meta-training process. Fig. 6b shows the evolution of the proximal penalty coefficient λ and the propensity score β(z) during meta-training for the medium-hard task. We see that λ ≈ 0.8 while β(z) ≈ 0.2 throughout training. This indicates that MQL automatically adjusts its test-time adaptation to use only few samples in (19) if the test task provides transitions quite different than those in the replay buffer.
We next discuss results on the harder task Half-Cheetah-Vel-OOD-Hard. There is a very large gap between training and test target velocities in this case. Fig. 7a shows the comparison with the same test protocol as the other experiments in this paper. In particular, we collect 200 time-steps from the new task and use it for adaptation in both MQL and TD3-context. Since this task is particularly hard, we also ran an experiment where 1200 time-steps (6 episodes) are given to the two algorithms for adaptation. The results are shown in Fig. 7b. In both cases, we see that MQL is better than TD3-context by a large margin (the standard deviation on these plots is high because the environment is hard). Note that since we re-initialize the hidden state of the context network at the beginning of each episode, TD3-context cannot take advantage of the extra time-steps. MQL on the other hand updates the policy explicitly and can take advantage of this extra data.
For the sake of thoroughness, we also collected 800 time-steps from the new task within the same episode; the results are shown in Fig. 8a. We again notice that MQL results in slightly higher rewards than TD3-context, in spite of the fact that both algorithms suffer a large degradation in performance as compared to Figs. 7a and 7b.
Figs. 7c, 7d and 8b show that the proximal penalty coefficient λ ≈ 1 and the propensity score β(z) ≈ 0 for a large fraction of training. This shows that MQL is able to automatically discard samples unrelated to the new task during the adaptation phase.
C MORE ABLATION STUDIES
We conduct a series of additional ablation studies to analyze the different components of the MQL algorithm. We use two environments for this purpose, namely Half-Cheetah-Vel and Walker-2D-Params. Fig. 9 and Fig. 10 show the results of these experiments. These experiments show that the adaptation phase is more useful for Half-Cheetah-Vel than for Walker-2D-Params: test and training tasks are very similar in Walker-2D-Params, which helps TD3-context achieve strong performance and leaves little room for improvement from adaptation.
D HYPER-PARAMETERS AND MORE DETAILS OF THE EMPIRICAL RESULTS | 1. What is the main contribution of the paper regarding Meta Q-Learning?
2. What are the strengths and weaknesses of the proposed approach compared to other state-of-the-art methods like PEARL?
3. How does the paper highlight the importance of context in meta-learning and fast adaptation?
4. Can the authors provide more details about how they generate context and whether the GRU resets on episode boundaries or not?
5. How does the result of the paper reconcile with Figure 7 of PEARL, which shows stochasticity is paramount for performing structured exploration during adaptation?
6. Does the ablative analysis suggest that regularization via fine-tuning and context are sufficient to solve these meta-RL tasks, and what implications does this have for the proposed method?
7. Could the authors repeat the ablative analysis on all six tasks to provide a more comprehensive understanding of the effectiveness of the proposed method?
8. Can the authors provide a clearer description of the proposed MQL algorithm, including how losses are interleaved, and whether pretraining \theta_meta using the multi-task loss before adaptation is done?
9. Would evaluating the technique for off-policy training using a discriminator to estimate a likelihood ratio in a standard off-policy learning setting be worthwhile? | Review | Review
This paper proposes Meta Q-Learning (MQL), an algorithm for efficient off-policy meta-learning. The method relies on a simple multi-task objective which provides initial parameter values for the adaptation phase. Adaptation is performed by gradient descent, minimizing TD-error on the new validation task (regularizing towards initial parameter values). To make adaptation data efficient, the method makes heavy use of off-policy data generated during meta-training, by minimizing its importance weighted TD-error. Importance weights are estimated via a likelihood ratio estimator, and are also used to derive the effective sample size of the meta-training batch, which is used to adaptively weight the regularization term. Intuitively, this has the effect of turning off regularization when meta-training trajectories are “close” to validation trajectories. One important but somewhat orthogonal contribution of the paper is to highlight the importance of context in meta-learning and fast adaptation. Concretely, the authors show that a simple actor-critic algorithm (TD3), whose policy and value are conditioned on a context variable derived from a recurrent network, performs surprisingly well in comparison to SoTA meta-learning algorithms like PEARL. MQL is evaluated on benchmark meta-RL environments from continuous control tasks and is shown to perform competitively with PEARL.
I have mixed opinions on this paper. On the positive side, and subject to further clarifications (see below), the paper seems to confirm that multi-task learning is almost sufficient to solve current meta-RL benchmarks in continuous control, without adaptation, as long as policy and critic are conditioned on a recurrent task context. This either highlights the strength of multi-task learning, or the inadequacies of current meta-RL benchmarks: either of which will be of interest to the community. On the other hand, the proposed MQL algorithm is only shown to significantly outperform this new baseline TD3-context agent on 1 of 6 tasks (Ant-Goal-2D), and furthermore the ablative analysis seems to suggest that the importance weighting and adaptive weighting of trust region are not very effective, and do not significantly change the performance of the method. The take-away seems to be that while context is crucial to generalization on these validation tasks, adaptation is not but can indeed be improved in a data-efficient manner with fine-tuning. MQL is only then a second thread to this story.
Clarifying Questions:
* The text and figures could be much clearer with respect to what is being measured/presented. Can you confirm that you report average validation returns as a function of meta-training steps, where validation performance is estimated after training on (at most) 2x200 transitions from the validation task (|D_new|=400)? This would match the protocol from PEARL. Fig 4(a) would then imply that one can get close to SoTA results on Ant-Fwd-Back + Half-Cheetah-Fwd-Back with no adaptation whatsoever (using TD3-context).
* Could the authors provide more details about how context is generated, in particular whether the GRU is reset on episode boundaries or not? If the recurrent context does span episode boundaries, are rewards being maximizing over a horizon greater than the episode length (similar to RL2)?
* How do you reconcile your result that a deterministic encoder is sufficient, compared to Figure 7 of PEARL which shows stochasticity is paramount for performing structured exploration during adaptation?
* Ablative analysis seems to suggest that off-policy learning plays a very minimal role during adaptation (beta=0). Can you confirm this interpretation? Would this not suggest that regularization via fine-tuning (regularized to multi-task prior) and context are sufficient to solve these meta-RL tasks? This would be a sufficient contribution by itself but unfortunately does little to validate the proposed method.
* Could you repeat the ablative analysis on all of the 6 tasks? Currently, this is performed on the two tasks for which TD3-context does best which leaves little room for improvement.
* Section 4.4. Propensity Score Estimation. “Use this model to sample transitions from the replay buffer that are similar to the new task”. Is this done via rejection sampling? This should be described more prominently in Section 3, along with a detailed description of the proposed MQL algorithm.
Detailed Comments:
* Paper desperately needs an algorithmic box, with a clear and concise description of the algorithm (how losses are interleaved, etc). Importantly, do the authors pretrain \theta_meta using the multi-task loss before doing adaptation?
* Please add more informative labels to axes for all your figures: timesteps for [validation, training]? Same for return. Please also augment captions to make figures as stand-alone as possible.
* MQL encompasses a technique for off-policy training using a discriminator to estimate a likelihood ratio. It would be nice to evaluate this in a standard off-policy learning setting, instead of it being limited to meta-learning.
ICLR | Title
Backdoor Defense via Decoupling the Training Process
Abstract
Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few training samples. The attacked model behaves normally on benign samples, whereas its prediction will be maliciously changed when the backdoor is activated. We reveal that poisoned samples tend to cluster together in the feature space of the attacked DNN model, which is mostly due to the end-to-end supervised training paradigm. Inspired by this observation, we propose a novel backdoor defense via decoupling the original end-to-end training process into three stages. Specifically, we first learn the backbone of a DNN model via self-supervised learning based on training samples without their labels. The learned backbone will map samples with the same ground-truth label to similar locations in the feature space. Then, we freeze the parameters of the learned backbone and train the remaining fully connected layers via standard training with all (labeled) training samples. Lastly, to further alleviate side-effects of poisoned samples in the second stage, we remove labels of some ‘low-credible’ samples determined based on the learned model and conduct a semi-supervised fine-tuning of the whole model. Extensive experiments on multiple benchmark datasets and DNN models verify that the proposed defense is effective in reducing backdoor threats while preserving high accuracy in predicting benign samples. Our code is available at https://github.com/SCLBD/DBD.
1 INTRODUCTION
Deep learning, especially deep neural networks (DNNs), has been widely adopted in many realms (Wang et al., 2020b; Li et al., 2020a; Wen et al., 2020) for its high effectiveness. In general, the training of DNNs requires a large amount of training samples and computational resources. Accordingly, third-party resources (e.g., third-party data or servers) are usually involved. While the opacity of the training process brings certain convenience, it also introduces new security threats.
Backdoor attack poses a new security threat to the training process of DNNs (Li et al., 2020c). It maliciously manipulates the prediction of the attacked DNNs by poisoning a few training samples. Specifically, backdoor attackers inject the backdoor trigger (i.e., a particular pattern) to some benign training images and change their labels with the attacker-specified target label. The connection between the backdoor trigger and the target label will be learned by DNNs during the training process. In the inference process, the prediction of attacked DNNs will be changed to the target label when the trigger is present, whereas the attacked DNNs will behave normally on benign samples. As such, users are difficult to realize the existence of hidden backdoors and therefore this attack is a serious threat to the practical applications of DNNs.
In this paper, we first investigate backdoor attacks from the hidden feature space. Our preliminary experiments reveal that the backdoor is embedded in the feature space, i.e., samples with the backdoor trigger (dubbed poisoned samples) tend to cluster together in the feature space. We reveal that this phenomenon is mostly due to the end-to-end supervised training paradigm. Specifically, the excessive learning capability allows DNNs to learn features about the backdoor trigger, while the DNNs can shrink the distance between poisoned samples in the feature space and connect the learned trigger-related features with the target label by the end-to-end supervised training. Based on this understanding, we propose to decouple the end-to-end training process for the backdoor defense. Specifically, we treat the DNNs as two disjoint parts, including a feature extractor (i.e., backbone) and a simple classifier (i.e., the remaining fully connected layers). We first learn the purified feature extractor via self-supervised learning (Kolesnikov et al., 2019; Chen et al., 2020a; Jing & Tian, 2020) with unlabeled training samples (obtained by removing their labels), and then learn the simple classifier via the standard supervised training process based on the learned feature extractor and all training samples. The strong data augmentations involved in the self-supervised learning damage trigger patterns, making them unlearnable during representation learning; and the decoupling process further disconnects trigger patterns and the target label. Accordingly, hidden backdoors cannot be successfully created even if the model is trained on the poisoned dataset based on our defense.

∗The first two authors contributed equally to this work. This work was mostly done when Kunzhe Huang and Yiming Li were the research interns at The Chinese University of Hong Kong, Shenzhen. † indicates corresponding authors: Baoyuan Wu (wubaoyuan@cuhk.edu.cn) and Zhan Qin (qinzhan@zju.edu.cn).
Moreover, we further reveal that the representation of poisoned samples generated by the purified extractor is significantly different from those generated by the extractor learned with the standard training process. Specifically, the poisoned sample lies close to samples with its ground-truth label instead of the target label. This phenomenon makes the training of the simple classifier similar to label-noise learning (Wang et al., 2019b; Ma et al., 2020; Berthon et al., 2021). As such, we first filter high-credible training samples (i.e., training samples that are most probably benign) and then use those samples as labeled samples and the remaining part to form unlabeled samples to fine-tune the whole model via semi-supervised learning (Rasmus et al., 2015; Berthelot et al., 2019; Sohn et al., 2020). This approach further reduces the adverse effects of poisoned samples.
The main contributions of this paper are three-fold. (1) We reveal that the backdoor is embedded in the feature space, which is mostly due to the end-to-end supervised training paradigm. (2) Based on our understanding, we propose a decoupling-based backdoor defense (DBD) to alleviate the threat of poisoning-based backdoor attacks. (3) Experiments on classical benchmark datasets are conducted, which verify the effectiveness of our defense.
2 RELATED WORK
2.1 BACKDOOR ATTACK
Backdoor attack is an emerging research area, which raises security concerns about training with third-party resources. In this paper, we focus on the poisoning-based backdoor attack towards image classification, where attackers can only modify the dataset instead of other training components (e.g., training loss). This threat could also happen in other tasks (Xiang et al., 2021; Zhai et al., 2021; Li et al., 2022) and with different attacker’s capacities (Nguyen & Tran, 2020; Tang et al., 2020; Zeng et al., 2021a), which are out-of-scope of this paper. In general, existing attacks can be divided into two main categories based on the property of target labels, as follows:
Poison-Label Backdoor Attack. It is currently the most common attack paradigm, where the target label is different from the ground-truth label of poisoned samples. BadNets (Gu et al., 2019) is the first and most representative poison-label attack. Specifically, it randomly selected a few samples from the original benign dataset to generate poisoned samples by stamping the backdoor trigger onto the (benign) image and changing their labels to an attacker-specified target label. The generated poisoned samples, together with the remaining benign ones, were combined to form the poisoned training dataset, which will be delivered to users. After that, (Chen et al., 2017) suggested that the poisoned image should be similar to its benign version for stealthiness, based on which they proposed the blended attack. Recently, (Xue et al., 2020; Li et al., 2020b; 2021c) further explored how to conduct poison-label backdoor attacks more stealthily. Most recently, a more stealthy and effective attack, WaNet (Nguyen & Tran, 2021), was proposed. WaNet adopted image warping as the backdoor trigger, which deforms but preserves the image content.
Clean-Label Backdoor Attack. Although the poisoned image generated by poison-label attacks could be similar to its benign version, users may still notice the attack by examining the image-label relationship. To address this problem, Turner et al. (2019) proposed the clean-label attack paradigm, where the target label is consistent with the ground-truth label of poisoned samples. Specifically,
they first leveraged adversarial perturbations or generative models to modify some benign images from the target class and then conducted the standard trigger injection process. This idea was generalized to attack video classification in (Zhao et al., 2020b), where they adopted the targeted universal adversarial perturbation (Moosavi-Dezfooli et al., 2017) as the trigger pattern. Although clean-label backdoor attacks are more stealthy compared with poison-label ones, they usually suffer from relatively poor performance and may even fail in creating backdoors (Li et al., 2020c).
2.2 BACKDOOR DEFENSE
Currently, there are also some approaches to alleviate the backdoor threat. Existing defenses are mostly empirical, and can be divided into five main categories, including (1) detection-based defenses (Xu et al., 2021; Zeng et al., 2021a; Xiang et al., 2022), (2) preprocessing based defenses (Doan et al., 2020; Li et al., 2021b; Zeng et al., 2021b), (3) model reconstruction based defenses (Zhao et al., 2020a; Li et al., 2021a; Zeng et al., 2022), (4) trigger synthesis based defenses (Guo et al., 2020; Dong et al., 2021; Shen et al., 2021), and (5) poison suppression based defenses (Du et al., 2020; Borgnia et al., 2021). Specifically, detection-based defenses examine whether a suspicious DNN or sample is attacked and deny the use of malicious objects; preprocessing based methods intend to damage trigger patterns contained in attack samples to prevent backdoor activation by introducing a preprocessing module before feeding images into DNNs; model reconstruction based ones aim at removing the hidden backdoor in DNNs by modifying models directly; the fourth type of defenses synthesize potential trigger patterns at first, followed by a second stage in which the hidden backdoor is eliminated by suppressing their effects; the last type of methods depress the effectiveness of poisoned samples during the training process to prevent the creation of hidden backdoors. In general, our method is most relevant to this type of defenses.
In this paper, we only focus on the last four types of defenses since they directly improve the robustness of DNNs. Besides, there were also a few works focusing on certified backdoor defenses (Wang et al., 2020a; Weber et al., 2020). Their robustness is theoretically guaranteed under certain assumptions, which causes these methods to be generally weaker than empirical ones in practice.
2.3 SEMI-SUPERVISED AND SELF-SUPERVISED LEARNING
Semi-supervised Learning. In many real-world applications, the acquisition of labeled data often relies on manual labeling, which is very expensive. In contrast, obtaining unlabeled samples is much easier. To utilize the power of unlabeled samples together with labeled ones, a great number of semi-supervised learning methods have been proposed (Gao et al., 2017; Berthelot et al., 2019; Van Engelen & Hoos, 2020). Recently, semi-supervised learning was also introduced to improve the security of DNNs (Stanforth et al., 2019; Carmon et al., 2019), where unlabelled samples were utilized in adversarial training. Most recently, (Yan et al., 2021) discussed how to backdoor semi-supervised learning. However, this approach needs to control other training components (e.g., training loss) in addition to modifying training samples and is therefore out of the scope of this paper. How to adopt semi-supervised learning for backdoor defense remains an open question.
Self-supervised Learning. This learning paradigm is a subset of unsupervised learning, where DNNs are trained with supervised signals generated from the data itself (Chen et al., 2020a; Grill et al., 2020; Liu et al., 2021). It has been adopted for increasing adversarial robustness (Hendrycks et al., 2019; Wu et al., 2021; Shi et al., 2021). Most recently, there were also a few works (Saha et al., 2021; Carlini & Terzis, 2021; Jia et al., 2021) exploring how to backdoor self-supervised learning. However, these attacks are out-of-scope of this paper since they need to control other training components (e.g., training loss) in addition to modifying training samples.
3 REVISITING BACKDOOR ATTACKS FROM THE HIDDEN FEATURE SPACE
In this section, we analyze the behavior of poisoned samples from the hidden feature space of attacked models and discuss its inherent mechanism.
Settings. We conduct the BadNets (Gu et al., 2019) and label-consistent attack (Turner et al., 2019) on CIFAR-10 dataset (Krizhevsky, 2009) for the discussion. They are representative of poison-label attacks and clean-label attacks, respectively. Specifically, we conduct supervised learning on the poisoned datasets with the standard training process and self-supervised learning on the unlabelled
poisoned datasets with SimCLR (Chen et al., 2020a). We visualize poisoned samples in the hidden feature space generated by attacked DNNs based on the t-SNE (Van der Maaten & Hinton, 2008). More detailed settings are presented in Appendix A.
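As a minimal sketch of the visualization protocol (assuming a trained `backbone` module, an image tensor `images`, and numpy arrays `labels` and a boolean mask `is_poisoned`; all names are illustrative):

```python
import torch
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

with torch.no_grad():
    feats = backbone(images).flatten(1).cpu().numpy()   # hidden features
emb = TSNE(n_components=2).fit_transform(feats)         # 2-D embedding

# Benign samples colored by class; poisoned samples as black crosses.
plt.scatter(emb[~is_poisoned, 0], emb[~is_poisoned, 1],
            c=labels[~is_poisoned], s=4)
plt.scatter(emb[is_poisoned, 0], emb[is_poisoned, 1],
            c='black', marker='x', s=8)
plt.show()
```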
Results. As shown in Figure 1(a)-1(b), poisoned samples (denoted by ‘black-cross’) tend to cluster together to form a separate cluster after the standard supervised training process, no matter whether under the poison-label attack or the clean-label attack. This phenomenon suggests why existing poisoning-based backdoor attacks can succeed. Specifically, the excessive learning capability allows DNNs to learn features about the backdoor trigger. Associated with the end-to-end supervised training paradigm, DNNs can shrink the distance between poisoned samples in the feature space and connect the learned trigger-related features with the target label. In contrast, as shown in Figure 1(c)-1(d), poisoned samples lie close to samples with their ground-truth labels after the self-supervised training process on the unlabelled poisoned dataset. It indicates that we can prevent the creation of backdoors by self-supervised learning, which will be further introduced in the next section.
4 DECOUPLING-BASED BACKDOOR DEFENSE
4.1 PRELIMINARIES
General Pipeline of Backdoor Attacks. Let D = {(xi, yi)}Ni=1 denote the benign training set, where xi ∈ X = {0, 1, . . . , 255}C×W×H is the image, yi ∈ Y = {0, 1, . . . , K−1} is its label, K is the number of classes, and yt ∈ Y indicates the target label. How to generate the poisoned dataset Dp is the cornerstone of backdoor attacks. Specifically, Dp consists of two subsets, including the modified version of a subset of D and the remaining benign samples, i.e., Dp = Dm ∪ Db, where Db ⊂ D, γ ≜ |Dm|/|D| is the poisoning rate, Dm = {(x′, yt) | x′ = G(x), (x, y) ∈ D\Db}, and G : X → X is an attacker-predefined poisoned image generator. For example, G(x) = (1−λ)⊗x + λ⊗t, where λ ∈ [0, 1]C×W×H, t ∈ X is the trigger pattern, and ⊗ is the element-wise product, in the blended attack (Chen et al., 2017). Once Dp is generated, it will be sent to users who will train DNNs on it. Hidden backdoors will be created after the training process.
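As a minimal sketch of the blended poisoned-image generator G and the poisoning step above (arrays are float images in [0, 1]; the function and variable names are illustrative):

```python
import numpy as np

def blended_G(x, trigger, lam=0.1):
    """G(x) = (1 - lam) * x + lam * t, element-wise."""
    return (1.0 - lam) * x + lam * trigger

def poison_dataset(images, labels, trigger, y_target, rate=0.05, rng=np.random):
    """Modify a random `rate` fraction of samples and relabel them with the
    target class (a poison-label attack)."""
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    images[idx] = blended_G(images[idx], trigger)
    labels[idx] = y_target
    return images, labels
```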
Threat Model. In this paper, we focus on defending against poisoning-based backdoor attacks. The attacker can arbitrarily modify the training set whereas they cannot change other training components (e.g., model structure and training loss). For our proposed defense, we assume that defenders can fully control the training process. This is the scenario in which users adopt third-party collected samples for training. Note that we do not assume that defenders have a local benign dataset, which is often required in many existing defenses (Wang et al., 2019a; Zhao et al., 2020a; Li et al., 2021a).
Defender’s Goals. The defender’s goals are to prevent the trained DNN model from predicting poisoned samples as the target label and to preserve the high accuracy on benign samples.
4.2 OVERVIEW OF THE DEFENSE PIPELINE
In this section, we describe the general pipeline of our defense. As shown in Figure 2, it consists of three main stages, including (1) learning a purified feature extractor via self-supervised learning, (2) filtering high-credible samples via label-noise learning, and (3) semi-supervised fine-tuning.
Specifically, in the first stage, we remove the labels of all training samples to form the unlabelled dataset, based on which we train the feature extractor via self-supervised learning. In the second stage, we freeze the learned feature extractor and adopt all training samples to train the remaining fully connected layers via supervised learning. We then filter α% high-credible samples based on the training loss. The smaller the loss, the more credible the sample. After the second stage, the training set will be separated into two disjoint parts, including high-credible samples and low-credible samples. We use high-credible samples as labeled samples and remove the labels of all low-credible samples to fine-tune the whole model via semi-supervised learning. More detailed information about each stage of our method will be illustrated in the following sections.
4.3 LEARNING PURIFIED FEATURE EXTRACTOR VIA SELF-SUPERVISED LEARNING
Let Dt denote the training set and fw : X → [0, 1]K indicate the DNN with parameters w = [wc, wf], where wc and wf indicate the parameters of the backbone and the fully connected layers, respectively. In this stage, we optimize wc based on the unlabeled version of Dt via self-supervised learning, as follows:
w∗c = arg min_{wc} ∑_{(x,y)∈Dt} L1(x; wc),  (1)
where L1(·) indicates the self-supervised loss (e.g., NT-Xent in SimCLR (Chen et al., 2020a)). Through the self-supervised learning, the learned feature extractor (i.e., backbone) will be purified even if the training set contains poisoned samples, as illustrated in Section 3.
4.4 FILTERING HIGH-CREDIBLE SAMPLES VIA LABEL-NOISE LEARNING
Once w∗c is obtained, the user can freeze it and adopt Dt to further optimize the remaining wf, i.e.,

w∗f = arg min_{wf} ∑_{(x,y)∈Dt} L2(f_{[w∗c, wf]}(x), y),  (2)
where L2(·) indicates the supervised loss (e.g., cross entropy). After the decoupling-based training process (1)-(2), even if the model is (partly) trained on the poisoned dataset, the hidden backdoor cannot be created since the feature extractor is purified. However, this simple strategy suffers from two main problems. Firstly, compared with the one trained via supervised learning, the accuracy of predicting benign samples will decrease to a certain extent, since the learned feature extractor is frozen in the second stage. Secondly, poisoned samples will serve as ‘outliers’ that further hinder the learning of the second stage when poison-label attacks appear, since those samples lie close to samples with their ground-truth labels instead of the target label in the hidden feature space generated by the learned purified feature extractor. These two problems indicate that we should remove poisoned samples and retrain or fine-tune the whole model.
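To make the decoupled second stage concrete, below is a minimal PyTorch sketch of training only the fully connected head on top of the frozen, purified backbone; `backbone` and `loader` are assumed to exist, and the feature/class dimensions are illustrative.

```python
import torch
import torch.nn as nn

head = nn.Linear(512, 10)            # e.g., ResNet-18 features -> 10 classes
for p in backbone.parameters():      # `backbone` from the first stage
    p.requires_grad = False          # keep the purified extractor frozen
opt = torch.optim.SGD(head.parameters(), lr=0.1, momentum=0.9)

for x, y in loader:                  # all (labeled) training samples
    with torch.no_grad():
        feats = backbone(x).flatten(1)   # features are computed, not trained
    loss = nn.functional.cross_entropy(head(feats), y)  # SCE variant is below
    opt.zero_grad(); loss.backward(); opt.step()
```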
Specifically, we select high-credible samples Dh based on the loss L2(·; [w∗c, w∗f]). The high-credible samples are defined as the α% training samples with the smallest loss, where α ∈ [0, 100] is a hyper-parameter. In particular, we adopt the symmetric cross-entropy (SCE) (Wang et al., 2019b) as L2(·), inspired by label-noise learning. As shown in Figure 3, compared with the CE loss, the SCE can significantly increase the differences between poisoned samples and benign ones, which further reduces the possibility that the high-credible dataset Dh still contains poisoned samples. Note that we do not intend to accurately separate poisoned samples and benign samples. We only want to ensure that the obtained Dh contains as few poisoned samples as possible.
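Below is a minimal PyTorch sketch of the per-sample SCE loss and the filtering step; the weights a and b follow the spirit of Wang et al. (2019b), but the exact constants and the names `model`, `x_all`, `y_all` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sce_loss(logits, targets, a=0.1, b=1.0):
    """Per-sample symmetric cross-entropy: a * CE + b * reverse CE."""
    ce = F.cross_entropy(logits, targets, reduction='none')
    pred = F.softmax(logits, dim=1)
    onehot = F.one_hot(targets, num_classes=logits.size(1)).float()
    rce = -(pred * onehot.clamp(min=1e-4).log()).sum(dim=1)  # log(0) clamped
    return a * ce + b * rce

# Filtering: keep the alpha% smallest-loss samples as the high-credible set D_h.
with torch.no_grad():
    losses = sce_loss(model(x_all), y_all)
keep = losses.argsort()[: int(0.5 * len(losses))]            # alpha = 50%
```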
4.5 SEMI-SUPERVISED FINE-TUNING
After the second stage, the third-party training set Dt will be separated into two disjoint parts, including the high-credible dataset Dh and the low-credible dataset Dl ≜ Dt\Dh. Let D̂l ≜ {x | (x, y) ∈ Dl} denote the unlabeled version of the low-credible dataset Dl. We fine-tune the whole trained model f_{[w∗c, w∗f]}(·) with semi-supervised learning as follows:

min_w L3(Dh, D̂l; w),  (3)
where L3(·) denotes the semi-supervised loss (e.g., the loss in MixMatch (Berthelot et al., 2019)). This process can prevent the side-effects of poisoned samples while utilizing the useful information they contain, and encourages the compatibility between the feature extractor and the simple classifier via learning them jointly instead of separately. Please refer to Section 5.3 for more results.
5 EXPERIMENTS
5.1 EXPERIMENTAL SETTINGS
Datasets and DNNs. We evaluate all defenses on two classical benchmark datasets, including CIFAR-10 (Krizhevsky, 2009) and (a subset of) ImageNet (Deng et al., 2009). We adopt the ResNet18 (He et al., 2016) for these tasks. More detailed settings are presented in Appendix B.1. Besides, we also provide the results on (a subset of) VGGFace2 (Cao et al., 2018) in Appendix C.
Attack Baselines. We examine all defense approaches in defending against four representative attacks. Specifically, we select BadNets (Gu et al., 2019), the backdoor attack with blended strategy (dubbed ‘Blended’) (Chen et al., 2017), WaNet (Nguyen & Tran, 2021), and the label-consistent attack with adversarial perturbations (dubbed ‘Label-Consistent’) (Turner et al., 2019) for the evaluation. They are representative of patch-based visible and invisible poison-label attacks, non-patch-based poison-label attacks, and clean-label attacks, respectively.
Defense Baselines. We compare our DBD with two defenses having the same defender's capacities, including DPSGD (Du et al., 2020) and ShrinkPad (Li et al., 2021b). We also compare with two other approaches with an additional requirement (i.e., having a local benign dataset), including
the neural cleanse with unlearning strategy (dubbed ‘NC’) (Wang et al., 2019a), and neural attention distillation (dubbed ‘NAD’) (Li et al., 2021a). They are the representative of poison suppression based defenses, preprocessing based defenses, trigger synthesis based defenses, and model reconstruction based defenses, respectively. We also provide results of DNNs trained without any defense (dubbed ‘No Defense’) as another important baseline for reference.
Attack Setups. We use a 2 × 2 square as the trigger pattern on CIFAR-10 dataset and the 32 × 32 Apple logo on ImageNet dataset for the BadNets, as suggested in (Gu et al., 2019; Wang et al., 2019a). For Blended, we adopt the ‘Hello Kitty’ pattern on CIFAR-10 and the random noise pattern on ImageNet, based on the suggestions in (Chen et al., 2017), and set the blended ratio λ = 0.1 on all datasets. The trigger pattern adopted in label-consistent attack is the same as the one used in BadNets. For WaNet, we adopt its default settings on CIFAR-10 dataset. However, on ImageNet dataset, we use different settings optimized by grid-search since the original ones fail. An example of poisoned samples generated by different attacks is shown in Figure 4. Besides, we set the poisoning rate γ1 = 2.5% for label-consistent attack (25% of training samples with the target label) and γ2 = 5% for three other attacks. More details are shown in Appendix B.2.
Defense Setups. For our DBD, we adopt SimCLR (Chen et al., 2020a) as the self-supervised method and MixMatch (Berthelot et al., 2019) as the semi-supervised method. More details about SimCLR and MixMatch are in Appendix I. The filtering rate α is the only key hyper-parameter in DBD, which is set to 50% in all cases. We set the shrinking rate to 10% for the ShrinkPad on all datasets, as suggested in (Li et al., 2021b; Zeng et al., 2021b). In particular, DPSGD and NAD are sensitive to their hyper-parameters. We report their best results in each case based on the grid-search (as shown in Appendix D). Besides, we split a 5% random subset of the benign training set as the local benign dataset for NC and NAD. More implementation details are provided in Appendix B.3.
Evaluation Metrics. We adopt the attack success rate (ASR) and benign accuracy (BA) to measure the effectiveness of all methods1. Specifically, let Dtest indicate the (benign) testing set and Cw : X → Y denote the trained classifier; we have ASR ≜ Pr_{(x,y)∈Dtest}{Cw(G(x)) = yt | y ≠ yt} and BA ≜ Pr_{(x,y)∈Dtest}{Cw(x) = y}, where yt is the target label and G(·) is the poisoned image generator. In particular, the lower the ASR and the higher the BA, the better the defense.
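Both metrics follow directly from their definitions; a minimal sketch, where `classifier` and `G` are assumed callables and `y_t` is the target label:

```python
import numpy as np

def benign_accuracy(classifier, xs, ys):
    """BA: accuracy of the trained classifier on benign test samples."""
    return float(np.mean([classifier(x) == y for x, y in zip(xs, ys)]))

def attack_success_rate(classifier, G, xs, ys, y_t):
    """ASR: fraction of non-target test samples classified as y_t once the
    trigger is injected via the poisoned image generator G."""
    hits = [classifier(G(x)) == y_t for x, y in zip(xs, ys) if y != y_t]
    return float(np.mean(hits))
```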
5.2 MAIN RESULTS
Comparing DBD with Defenses having the Same Requirements. As shown in Tables 1-2, DBD is significantly better than defenses having the same requirements (i.e., DPSGD and ShrinkPad) in defending against all attacks. For example, the benign accuracy of DBD is over 20% higher and the attack success rate is more than 5% lower than those of DPSGD in all cases. Specifically, the attack success rate of models with DBD is less than 2% in all cases (mostly < 0.5%), which verifies that our method can successfully prevent the creation of hidden backdoors. Moreover, the decreases in benign accuracy are less than 2% when defending against poison-label attacks, compared with models trained without any defense. Our method is even better on the relatively larger dataset, where all baseline methods become less effective. These results verify the effectiveness of our method.
1Among all defense methods, the one with the best performance is indicated in boldface, and the underlined value denotes the second-best result.
Comparing DBD with Defenses having Extra Requirements. We also compare our defense with two other methods (i.e., NC and NAD), which have an additional requirement that defenders hold a benign local dataset. As shown in Tables 1-2, NC and NAD are better than DPSGD and ShrinkPad, as expected, since they adopt additional information from the benign local dataset. Although NAD and NC use additional information, our method is still better than them, even when their performance is tuned to its best while our method only uses the default settings. Specifically, the BA of NC is on par with that of our method; however, this comes at the sacrifice of ASR. Especially on the ImageNet dataset, NC has limited effects in reducing the ASR. In contrast, our method reaches the smallest ASR while its BA is either the highest or the second-highest in almost all cases. These results verify the effectiveness of our method again.
Results. As shown in Figure 7, our method can still prevent the creation of hidden backdoors even when the poisoning rate reaches 20%. Besides, DBD also maintains high benign accuracy. In other words, our method is effective in defending against attacks with different strengths.
5.3 ABLATION STUDY
There are four key strategies in DBD, including (1) obtaining purified feature extractor, (2) using SCE instead of CE in the second stage, (3) reducing side-effects of low-credible samples, and (4) fine-tuning the whole model via semi-supervised learning. Here we verify their effectiveness.
Settings. We compare the proposed DBD with its four variants, including (1) DBD without SS, (2) SS with CE, (3) SS with SCE, and (4) SS with SCE + Tuning, on the CIFAR-10 dataset. Specifically, in the first variant, we replace the backbone generated by self-supervised learning with the one trained in a supervised fashion and keep other parts unchanged. In the second variant, we freeze the backbone learned via self-supervised learning and train the remaining fully-connected layers with the cross-entropy loss on all training samples. The third variant is similar to the second one; the only difference is that it uses symmetric cross-entropy instead of cross-entropy to train the fully-connected layers. The last variant is an advanced version of the third one, which further fine-tunes the fully-connected layers on high-credible samples filtered by the third variant.
Results. As shown in Table 3, we can conclude that decoupling the original end-to-end supervised training process is effective in preventing the creation of hidden backdoors, by comparing our DBD with its first variant and the model trained without any defense. Besides, we can also verify the effectiveness of SCE loss on defending against poison-label backdoor attacks by comparing the second and third DBD variants. Moreover, the fourth DBD variant has relatively lower ASR and BA, compared with the third one. This phenomenon is due to the removal of low-credible samples. It indicates that reducing side-effects of low-credible samples while adopting their useful information is important for the defense. We can also verify that fine-tuning the whole model via semi-supervised learning is also useful by comparing the fourth variant and the proposed DBD.
5.4 RESISTANCE TO POTENTIAL ADAPTIVE ATTACKS
In our paper, we adopted the classical defense setting in which attackers have no information about the defense. Attackers may design adaptive attacks if they know the existence of our DBD. The most straightforward idea is to manipulate the self-supervised training process so that poisoned samples still form a new cluster after the self-supervised learning. However, our threat model forbids this, since attackers can only poison the third-party dataset and cannot control the training process. Despite this, if attackers know the model structure used by defenders, they may still design adaptive attacks by optimizing the trigger pattern so that poisoned samples form a new cluster after the self-supervised learning, as follows:
Problem Formulation. For a $K$-classification problem, let $\mathcal{X}' = \{x_i\}_{i=1}^{M}$ denote the benign images selected for poisoning, $\mathcal{X}_j = \{x_i\}_{i=1}^{N_j}$ denote the benign images with ground-truth label $j$, and $g$ be a trained backbone. Given an attacker-predefined poisoned image generator $G$, the adaptive attack aims to optimize a trigger pattern $t$ by minimizing the distance between poisoned images while maximizing the distance between the center of poisoned images and the centers of the clusters of benign images with different labels, i.e.,

$$\min_t \; \frac{1}{M}\sum_{x\in\mathcal{X}'} d\big(g(G(x;t)),\, g'\big) - \frac{1}{K}\sum_{i=1}^{K} d\big(g',\, g_i\big), \tag{4}$$

where $g' \triangleq \frac{1}{M}\sum_{x\in\mathcal{X}'} g(G(x;t))$, $g_i \triangleq \frac{1}{N_i}\sum_{x\in\mathcal{X}_i} g(x)$, and $d$ is a distance metric.
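As a rough illustration of this optimization, the following PyTorch sketch solves Eq. (4) by gradient descent on the trigger. It assumes an additive generator G(x; t) = x + t and precomputed benign class centers, and omits pixel-range clamping of the trigger for brevity; it is a sketch, not the exact attack implementation.

```python
import torch

def optimize_adaptive_trigger(backbone, poison_images, class_centers,
                              steps=100, lr=0.1):
    """Sketch of the adaptive attack in Eq. (4): pull poisoned features toward
    their own center while pushing that center away from the benign class
    centers. `backbone` is the attacker's surrogate extractor g,
    `poison_images` is an (M, C, H, W) tensor of images selected for
    poisoning (X'), and `class_centers` is a (K, d) tensor of the benign
    feature centers g_i.
    """
    backbone.eval()
    trigger = torch.zeros_like(poison_images[:1], requires_grad=True)  # the trigger t
    optimizer = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        feats = backbone(poison_images + trigger)            # g(G(x; t)) with additive G
        center = feats.mean(dim=0, keepdim=True)             # g'
        intra = (feats - center).norm(dim=1).mean()          # (1/M) sum of d(g(G(x;t)), g')
        inter = (class_centers - center).norm(dim=1).mean()  # (1/K) sum of d(g', g_i)
        loss = intra - inter                                 # the objective of Eq. (4)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return trigger.detach()
```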
Settings. We adopt the CIFAR-10 dataset and use the ℓ2 norm as the distance metric to conduct the experiment. Specifically, we assume that attackers have the entire benign dataset, based on which they can train a backbone adopted in the first stage of our DBD. We use the Adam optimizer to solve the above optimization problem for 100 epochs with a learning rate of 0.1. The trigger size is set to 32×32, which means the attacker can completely modify the content of poisoned samples, regardless of their original semantic information and the stealthiness of the attack. This setting maximizes the attack's capability, since clustering poisoned samples together is very difficult in self-supervised learning.
Results. The adaptive attack works well when there is no defense (BA=94.96%, ASR=99.70%). However, this attack still fails against our DBD (BA=93.21%, ASR=1.02%). In other words, our defense is resistant to this adaptive attack. This is most probably because the trigger optimized on the surrogate backbone becomes far less effective once the model is retrained, since model parameters change due to the random initialization and the update of model weights during the training process.
6 CONCLUSION
The mechanism of poisoning-based backdoor attacks is to establish a latent connection between trigger patterns and the target label during the training process. In this paper, we revealed that this connection is learned mostly due to the end-to-end supervised training paradigm. Motivated by this understanding, we proposed a decoupling-based backdoor defense, which first learns the backbone via self-supervised learning and then the remaining fully-connected layers by the classical supervised learning. We also introduced the label-noise learning method to determine high-credible and low-credible samples, based on which we fine-tuned the whole model via semi-supervised learning. Extensive experiments verify that our defense is effective in reducing backdoor threats while preserving high accuracy in predicting benign samples.
ACKNOWLEDGMENTS
Baoyuan Wu is supported in part by the National Natural Science Foundation of China under Grant 62076213, the University Development Fund of the Chinese University of Hong Kong, Shenzhen under Grant 01001810, and the Special Project Fund of Shenzhen Research Institute of Big Data under Grant T00120210003. Zhan Qin is supported in part by the National Natural Science Foundation of China under Grant U20A20178, the National Key Research and Development Program of China under Grant 2020AAA0107705, and the Research Laboratory for Data Security and Privacy, Zhejiang University-Ant Financial Fintech Center. Kui Ren is supported by the National Key Research and Development Program of China under Grant 2020AAA0107705.
ETHICS STATEMENT
DNNs are widely adopted in many mission-critical areas (e.g., face recognition) and therefore their security is of great significance. The vulnerability of DNNs to backdoor attacks raises serious concerns about using third-party training resources. In this paper, we propose a general training pipeline to obtain backdoor-free DNNs, even if the training dataset contains poisoned samples. This work has no ethical issues in general since our method is purely defensive and does not reveal any new vulnerabilities of DNNs. However, we need to mention that our defense can be adopted only when training with untrusted samples, and backdoor attacks could happen in other scenarios. People should not be too optimistic about eliminating backdoor threats.
REPRODUCIBILITY STATEMENT
The detailed descriptions of datasets, models, and training settings are in Appendix A-D. We also describe the computational facilities and cost in Appendix J-K. Codes of our DBD are also open-sourced.
A DETAILED SETTINGS FOR REVISITING BACKDOOR ATTACKS
Attack Setups. We conduct the BadNets (Gu et al., 2019) and label-consistent attack (Turner et al., 2019) with the target label yt = 3 on the CIFAR-10 dataset (Krizhevsky, 2009). The trigger patterns are the same as those presented in Section 5.2. In particular, we implement the label-consistent attack with adversarial perturbations, as suggested in its original paper (Turner et al., 2019). Specifically, we use the projected gradient descent (PGD) (Madry et al., 2018) to generate adversarial perturbations within the ℓ∞-ball with the maximum perturbation size ε = 16.
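For reference, the following is a minimal sketch of untargeted ℓ∞ PGD of the kind used here. It assumes images scaled to [0, 1] (so ε = 16 corresponds to 16/255), and the step size and iteration count are illustrative choices rather than the exact values used in our experiments.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=16 / 255, alpha=2 / 255, steps=10):
    """Untargeted PGD within an l_inf-ball of radius eps, as used to perturb
    target-class images in the label-consistent attack.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)          # project into the ball
            x_adv = x_adv.clamp(0, 1)                         # keep a valid image
        x_adv = x_adv.detach()
    return x_adv
```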
Training Setups. We conduct supervised learning on the poisoned datasets with the standard training process and the self-supervised learning on the unlabelled poisoned datasets with the SimCLR (Chen et al., 2020a). The supervised training is conducted based on the open-source code2. Specifically, we use the SGD optimizer with momentum 0.9, weight decay of 5 × 10−4, and an initial learning rate of 0.1. The batch size is set to 128 and we train the ResNet-18 model for 200 epochs. The learning rate is decreased by a factor of 10 at epochs 100 and 150, respectively. Besides, we add triggers before performing the data augmentation (e.g., random crop and horizontal flipping). For the self-supervised training, we use the stochastic gradient descent (SGD) optimizer with a momentum of 0.9, an initial learning rate of 0.4, and a weight decay factor of 5 × 10−4. We use a batch size of 512, and train the backbone for 1,000 epochs. We decay the learning rate with the cosine decay schedule (Loshchilov & Hutter, 2016) without a restart. Besides, we also adopt strong data augmentation techniques, including random crop and resize (with random flip), color distortions, and Gaussian blur, as suggested in (Chen et al., 2020a). All models are trained until convergence.
t-SNE Visualization Settings. We treat the output of the last residual unit as the feature representation and use the tsne-cuda library (Chan et al., 2019) to get the feature embedding of all samples. To have a better visualization, we adopt all poisoned samples and randomly select 10% benign samples for visualizing models under the supervised learning, and adopt 30% poisoned samples and 10% benign samples for those under the self-supervised learning.
B DETAILED SETTINGS FOR MAIN EXPERIMENTS
B.1 MORE DETAILS ABOUT DATASETS AND DNNS
Due to the limitations of computational resources and time, we adopt a subset randomly selected from the original ImageNet. More detailed information about the datasets and DNNs adopted in the main experiments of our paper is presented in Table 4.
B.2 MORE DETAILS ABOUT ATTACK SETTINGS
Attack Setups. We conduct the BadNets (Gu et al., 2019), blended attack (dubbed 'Blended') (Chen et al., 2017), label-consistent attack (dubbed 'Label-Consistent') (Turner et al., 2019), and WaNet (Nguyen & Tran, 2021) with the target label yt = 3 on all datasets. The trigger patterns are the same as those presented in Section 5.2. In particular, we set the blended ratio λ = 0.1 for the blended attack on all datasets and examine the label-consistent attack with the maximum perturbation size ε ∈ {16, 32}. Besides, WaNet assumed that attackers can fully control the whole training process in its original paper. However, we found that WaNet only modifies training data, while other training components (e.g., training loss, training schedule, and model structure) are the same as those used in the standard training process. As such, we re-implement its code in the poisoning-based attack scenario based on its official code3. Specifically, following the settings in its original paper, we set the noise rate ρn = 0.2, control grid size k = 4, and warping strength s = 0.5 on the CIFAR-10 dataset. However, we found that the default k and s are too small to make the attack work on the ImageNet dataset (as shown in Table 5-6). Besides, the 'noise mode' also significantly reduces the attack effectiveness (as shown in Table 7). As such, we set k = 224 and s = 1 and train models without the noise mode on the ImageNet dataset.

2https://github.com/kuangliu/pytorch-cifar
3https://github.com/VinAIResearch/Warping-based_Backdoor_Attack-release
Training Setups. On the CIFAR-10 dataset (Krizhevsky, 2009), the settings are the same as those described in Section A; on the ImageNet dataset (Deng et al., 2009), we conduct experiments based on the open-source code4. Specifically, we use the SGD optimizer with momentum 0.9, weight decay of 10−4, and an initial learning rate of 0.1. The batch size is set to 256 and we train the ResNet-18 model for 90 epochs. The learning rate is decreased by a factor of 10 at epochs 30 and 60, respectively. Besides, since the raw images in the ImageNet dataset are of different sizes, we resize them to 3 × 224 × 224 before adding triggers.
B.3 MORE DETAILS ABOUT DEFENSE SETTINGS
Settings for NC. We conduct reverse engineering and anomaly detection based on its open-source code5. We implement the ‘unlearning’ method to patch attacked models, as suggested in its paper (Wang et al., 2019a). We randomly select 5% benign training samples as the local benign dataset, which is used in the ‘unlearning’ process. Unless otherwise specified, other settings are the same as those used in (Wang et al., 2019a).
Settings for NAD. We implement this method based on its open-source code6. The original NAD only conducted experiments on the WideResNet model. In our paper, we calculate the NAD loss over the last residual group for the ResNet-18. The local benign dataset is the same as the one adopted in NC, which is used in the fine-tuning and distillation processes of NAD. Unless otherwise specified, other settings are the same as those used in (Li et al., 2021a).
Settings for DPSGD. The original DPSGD was conducted on the MNIST dataset and implemented in the TensorFlow framework. In this paper, we re-implement it based on the differentially private SGD method provided by Opacus7. Specifically, we replace the original SGD optimizer with the differentially private one, as suggested in (Du et al., 2020). There are two important hyper-parameters in DPSGD, including the noise scale σ and the clipping bound C. In the experiments, we set C = 1 and select the best σ by grid-search.
4https://github.com/pytorch/examples/tree/master/imagenet 5https://github.com/bolunwang/backdoor 6https://github.com/bboylyg/NAD 7https://github.com/pytorch/opacus
Settings for ShrinkPad. We set the shrinking rate to 10% on all datasets, as suggested in (Li et al., 2021b; Zeng et al., 2021b). Following their settings, we pad 0-pixels at the bottom right of the shrunk image to expand it to its original size.
Settings for our Defense. In the first stage, we adopt SimCLR (Chen et al., 2020a) to perform self-supervised learning. We train backbones for 100 instead of 1,000 epochs to reduce computational costs while preserving effectiveness. Other settings are the same as those described in Section A. We use the same settings across all datasets, models, and attacks. In the second stage, we use the Adam optimizer with a learning rate of 0.002 and set the batch size to 128. We train the fully connected layers for 10 epochs with the SCE loss (Wang et al., 2019b). Two hyper-parameters involved in the SCE (i.e., α and β in the original paper) are set to 0.1 and 1, respectively. After that, we filter 50% high-credible samples. We use the same settings across all datasets, models, and attacks. In the third stage, we adopt the MixMatch (Berthelot et al., 2019) for semi-supervised fine-tuning with the settings suggested in its original paper. Specifically, we use the Adam optimizer with a learning rate of 0.002 and a batch size of 64, and fine-tune the model for 190 epochs on the CIFAR-10 and 80 epochs on the ImageNet dataset, respectively. We set the temperature T = 0.5 and the weight of the unsupervised loss λu = 15 on the CIFAR-10 and λu = 6 on the ImageNet dataset, respectively. Moreover, we re-filter high-credible samples after every epoch of the third stage based on the SCE loss.
C DEFENDING AGAINST ATTACKS ON VGGFACE2 DATASET
Dataset and DNN. Due to the limitations of computational resources and time, we adopt a subset randomly selected from the original VGGFace2 (Cao et al., 2018). More details are in Table 8.
Settings for Attacks. For the training of models on the VGGFace2 dataset, the batch size is set to 32 and we conduct experiments on the DenseNet-121 model (Huang et al., 2017). Examples of poisoned samples generated by different attacks are shown in Figure 5. Other settings are the same as those used on the ImageNet dataset.
Settings for Defenses. For NAD, we calculate the NAD loss over the second to last layer for the DenseNet-121. Other settings are the same as those used on the ImageNet dataset.
Results. As shown in Table 9, our defense still reaches the best performance even compared with NC and NAD. Specifically, the BA of NC is on par with that of our method, whereas this comes at the sacrifice of a higher ASR. These results verify the effectiveness of our defense again.
D SEARCHING BEST RESULTS FOR DPSGD AND NAD
The effectiveness of DPSGD and NAD is sensitive to their hyper-parameters. Here we search for their best results based on the criterion that 'BA − ASR' reaches the highest value after the defense.
D.1 SEARCHING BEST RESULTS FOR DPSGD
In general, the larger the σ, the smaller the ASR, but also the smaller the BA. The results of DPSGD are shown in Table 10-12, where the best results are marked in boldface.
D.2 SEARCHING BEST RESULTS FOR NAD
We found that the fine-tuning stage of NAD is sensitive to the learning rate. We search for the best initial learning rate from {0.1, 0.01, 0.001}. As shown in Table 13-15, a very large learning rate significantly reduces the BA, while a very small learning rate cannot reduce the ASR effectively. To keep a relatively large BA while maintaining a small ASR, we set η = 0.01 in the fine-tuning stage.
The distillation stage of NAD is also sensitive to its hyper-parameter β. We select the best β via the grid-search. The results are shown in Table 16-19.
E DEFENDING AGAINST LABEL-CONSISTENT ATTACK WITH A SMALLER POISONING RATE
For the label-consistent attack, besides the 2.5% poisoning rate examined in the main manuscript, 0.6% is another important setting provided in its original paper (Turner et al., 2019). In this section, we compare different defenses against the label-consistent attack with poisoning rate γ = 0.6%.
As shown in Table 20, when defending against label-consistent attack with a 0.6% poisoning rate, our method is still significantly better than defenses having the same requirements (i.e., DPSGD and ShrinkPad). Even compared with those having the additional requirement (i.e., NC and NAD) under their best settings, our defense is still better or on par with them under the default settings. These results verify the effectiveness of our method again.
F DEFENDING AGAINST ATTACKS WITH DIFFERENT TRIGGER PATTERNS
In this section, we verify whether DBD is still effective when different trigger patterns are adopted.
Settings. For simplicity, we adopt the BadNets on the CIFAR-10 dataset as an example for the discussion. Specifically, we change the location and size of the backdoor trigger while keeping other settings unchanged to evaluate the BA and ASR before and after our defense.
Results. As shown in Table 21, although there are some fluctuations, the ASR is smaller than 2% while the BA is greater than 92% in all cases. In other words, our method is effective in defending against attacks with different trigger patterns.
G DEFENDING AGAINST ATTACKS WITH DYNAMIC TRIGGERS
In this section, we verify whether DBD is still effective when attackers adopt dynamic triggers.
Settings. We compare DBD with MESA (Qiao et al., 2019) in defending against the dynamic attack discussed in (Qiao et al., 2019) on the CIFAR-10 dataset as an example for the discussion. This dynamic attack uses a distribution of triggers instead of a fixed trigger.
Results. The BA and ASR of DBD are 92.4% and 0.4%, while those of MESA are 94.8% and 2.4%. However, we find that MESA fails to defend against the blended attack (since it cannot correctly detect the trigger), whereas DBD is still effective. These results verify the effectiveness of our defense.
H DISCUSSIONS
H.1 EFFECTS OF HYPER-PARAMETERS
Settings. Here we analyze the effect of filtering rate α, which is the only key method-related hyperparameter in our DBD. We adopt the results on the CIFAR-10 dataset for discussion. Except for the studied parameter α, other settings are the same as those used in Section 5.2.
[Figure 6: The effects of filtering rate. Two panels plot the BA (%) and ASR (%) against the filtering rate (30%-60%) for BadNets, Blended, WaNet, and Label-Consistent.]

[Figure 7: The effects of poisoning rate. Two panels plot the BA (%) and ASR (%) against the poisoning rate (0%-20%) for BadNets, Blended, and WaNet.]
Results. The number of labeled samples used in the third stage increases with the filtering rate α, while the probability that the filtered high-credible dataset contains poisoned samples also increases. As shown in Figure 6, DBD can still maintain relatively high benign accuracy even when the filtering rate α is relatively small (e.g., 30%). It is mostly due to the high quality of the learned purified feature extractor and the semi-supervised fine-tuning process. DBD can also reach a nearly 0% attack success rate in all cases. However, we also have to notice that the high-credible dataset may contain poisoned samples when α is very large, which in turn creates hidden backdoors again during the fine-tuning process. Defenders should specify α based on their specific needs.
H.2 DEFENDING ATTACKS WITH VARIOUS POISONING RATES
Settings. We evaluate our method in defending against attacks with different poisoning rates γ on the CIFAR-10 dataset. Except for γ, other settings are the same as those used in Section 5.2.

Results. As shown in Figure 7, our method can still prevent the creation of hidden backdoors even when the poisoning rate reaches 20%. Besides, DBD also maintains high benign accuracy. In other words, our method is effective in defending against attacks with different strengths.
I MORE DETAILS ABOUT SIMCLR, SCE, AND MIXMATCH
NT-Xent Loss in SimCLR. Given a sample mini-batch containing N different samples, SimCLR first applies two separate data augmentations to each sample to obtain 2N augmented samples. The loss for a positive pair of samples (i, j) can be defined as:
$$\mathcal{L}_{i,j} = -\log \frac{\exp\left(\mathrm{sim}(z_i, z_j)/\tau\right)}{\sum_{k=1}^{2N} \mathbb{I}\{k \neq i\} \cdot \exp\left(\mathrm{sim}(z_i, z_k)/\tau\right)}, \tag{5}$$

where $\mathrm{sim}(\cdot, \cdot)$ is the cosine similarity, $z_i$ is the feature representation of sample $i$, $\tau$ is the temperature parameter, and $\mathbb{I}\{k \neq i\} \in \{0, 1\}$ is an indicator that equals 1 if and only if $k \neq i$. The NT-Xent loss is computed across all 2N positive pairs in this mini-batch.
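A compact PyTorch sketch of this loss is given below. It assumes the 2N representations are arranged so that rows 2k and 2k+1 form a positive pair, and it implements the indicator in Eq. (5) by masking the diagonal; this is a sketch rather than the SimCLR reference implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent(z, tau=0.5):
    """NT-Xent loss of Eq. (5). `z` has shape (2N, d): two augmented views of
    each of N samples, arranged so that z[2k] and z[2k+1] are a positive pair.
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                      # pairwise cosine similarities / tau
    sim.fill_diagonal_(float("-inf"))          # implements the indicator I{k != i}
    # Index of the positive partner for each row: pairs (0,1), (2,3), ...
    pos = torch.arange(z.size(0), device=z.device) ^ 1
    return F.cross_entropy(sim, pos)           # -log softmax at the positive pair
```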
SCE. The symmetric cross entropy (SCE) can be defined as:

$$\mathcal{L}_{SCE} = H(p, q) + H(q, p), \tag{6}$$

where $H(p, q)$ is the cross entropy, $H(q, p)$ is the reversed cross entropy, $p$ is the prediction, and $q$ is the one-hot label (of the evaluated sample).
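A minimal per-batch implementation is sketched below. The clamping constant that avoids log(0) in the reversed term, and the α/β weights (set to 0.1 and 1 in our second stage), follow the common implementation of SCE and are assumptions rather than part of Eq. (6) itself.

```python
import torch
import torch.nn.functional as F

def sce_loss(logits, target, alpha=0.1, beta=1.0, num_classes=10, clamp=1e-4):
    """Symmetric cross entropy of Eq. (6), with the weights used in stage 2."""
    ce = F.cross_entropy(logits, target)                     # H(p, q)
    pred = F.softmax(logits, dim=1).clamp(min=clamp)
    one_hot = F.one_hot(target, num_classes).float().clamp(min=clamp)
    rce = -(pred * one_hot.log()).sum(dim=1).mean()          # H(q, p), reversed term
    return alpha * ce + beta * rce
```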
MixMatch Loss. For a batch $\mathcal{X}$ of labeled samples and a batch $\mathcal{U}$ of unlabeled samples ($|\mathcal{X}| = |\mathcal{U}|$), MixMatch produces a guessed label $\bar{q}$ for each unlabeled sample $u \in \mathcal{U}$ and applies MixUp (Zhang et al., 2018) to obtain the augmented $\mathcal{X}'$ and $\mathcal{U}'$. The losses $\mathcal{L}_{\mathcal{X}}$ and $\mathcal{L}_{\mathcal{U}}$ can be defined as:
$$\mathcal{L}_{\mathcal{X}} = \frac{1}{|\mathcal{X}'|} \sum_{(x,q)\in\mathcal{X}'} H(p_x, q), \tag{7}$$

where $p_x$ is the prediction of $x$, $q$ is its one-hot label, and $H(\cdot, \cdot)$ is the cross entropy.
$$\mathcal{L}_{\mathcal{U}} = \frac{1}{K \cdot |\mathcal{U}'|} \sum_{(u,\bar{q})\in\mathcal{U}'} \left\|p_u - \bar{q}\right\|_2^2, \tag{8}$$

where $p_u$ is the prediction of $u$, $\bar{q}$ is its guessed one-hot label, and $K$ is the number of classes.
By combining $\mathcal{L}_{\mathcal{X}}$ with $\mathcal{L}_{\mathcal{U}}$, the MixMatch loss can be defined as:

$$\mathcal{L} = \mathcal{L}_{\mathcal{X}} + \lambda_{\mathcal{U}} \cdot \mathcal{L}_{\mathcal{U}}, \tag{9}$$

where $\lambda_{\mathcal{U}}$ is a hyper-parameter.
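The following sketch combines Eqs. (7)-(9) given model logits and the (MixUp-ed) soft targets; the label-guessing and MixUp steps themselves are omitted, so this only illustrates the loss computation.

```python
import torch
import torch.nn.functional as F

def mixmatch_loss(logits_x, targets_x, logits_u, guessed_u, lambda_u=15.0):
    """Combined MixMatch loss of Eq. (9). `targets_x` and `guessed_u` are the
    (possibly MixUp-ed) soft labels of the labeled batch X' and the guessed
    labels of the unlabeled batch U', respectively.
    """
    num_classes = logits_x.size(1)
    # L_X of Eq. (7): cross entropy with soft targets.
    loss_x = -(targets_x * F.log_softmax(logits_x, dim=1)).sum(dim=1).mean()
    # L_U of Eq. (8): squared error between predictions and guessed labels.
    probs_u = F.softmax(logits_u, dim=1)
    loss_u = ((probs_u - guessed_u) ** 2).sum(dim=1).mean() / num_classes
    return loss_x + lambda_u * loss_u
```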
J COMPUTATIONAL FACILITIES
We conduct all experiments on two Ubuntu 18.04 servers having different GPUs. One has four NVIDIA GeForce RTX 2080 Ti GPUs with 11GB memory (dubbed 'RTX 2080Ti') and the other has three NVIDIA Tesla V100 GPUs with 32GB memory (dubbed 'V100').
Computational Facilities for Attacks. All experiments are conducted with a single RTX 2080 Ti.
Computational Facilities for Defenses. Since we do not use a memory-efficient implementation of DenseNet-121, we conduct DPSGD experiments on the VGGFace2 dataset with a single V100. Other experiments of baseline defenses are conducted with a single RTX 2080 Ti. For our defense, we adopt PyTorch (Paszke et al., 2019) distributed data-parallel and automatic mixed precision training (Micikevicius et al., 2018) with two RTX 2080 Ti for self-supervised learning on the VGGFace2 dataset. Other experiments are conducted with a single RTX 2080 Ti.
K COMPUTATIONAL COST
In this section, we analyze the computational cost of our method stage by stage, compared to standard supervised learning.
Stage 1. Self-supervised learning is known to have a higher computational cost than standard supervised learning (Chen et al., 2020a; He et al., 2020). In our experiments, SimCLR requires roughly four times the computational cost of standard supervised learning. Since we intend to obtain a purified instead of well-trained feature extractor, we train the feature extractor (i.e., backbone) for fewer epochs than the original SimCLR to reduce the training time. As described in Section B.3, we find that 100 epochs are enough to preserve effectiveness.
Stage 2. Since we freeze the backbone and only train the remaining fully connected layers, the computational cost is roughly 60% of standard supervised learning.
Stage 3. Semi-supervised learning is known to have an extra labeling cost compared with standard supervised learning (Gao et al., 2020). In our experiments, MixMatch requires roughly twice the computational cost of standard supervised learning.
We will explore more computationally efficient training methods in our future work.
L COMPARING OUR DBD WITH DETECTION-BASED BACKDOOR DEFENSES
In this paper, we do not intend to filter malicious and benign samples accurately, as we mentioned in Section 4.4. However, we notice that the second stage of our DBD can serve as a detection-based backdoor defense, since it can filter poisoned samples. In this section, we compare the filtering ability of our DBD (stage 2) with existing detection-based backdoor defenses.
Settings. We compare our DBD with two representative detection-based methods, including Spectral Signatures (SS) (Tran et al., 2018) and Activation Clustering (AC) (Chen et al., 2019), on the CIFAR-10 dataset. These detection-based methods (e.g., SS and AC) filter malicious samples from the training set and train the model on the non-malicious samples. Specifically, we re-implement SS in PyTorch based on its official code8 and adopt the open-source code9 for AC, following the settings in their original papers. In particular, since SS filters 1.5ε malicious samples for each class, where ε is the key hyper-parameter denoting the upper bound of the number of poisoned training samples, we adopt different ε for a fair comparison.
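For clarity, the following is a rough sketch of the per-class filtering rule of SS under the above setting; the exact feature extraction and preprocessing in the official code may differ.

```python
import torch

def spectral_signature_filter(features, labels, eps, num_classes=10):
    """Sketch of Spectral Signatures (Tran et al., 2018): for each class,
    score samples by their squared projection onto the top singular vector of
    the centered feature matrix, and flag the 1.5*eps highest-scoring samples.
    Returns a boolean mask over the training set of samples flagged as
    (potentially) poisoned.
    """
    flagged = torch.zeros(len(labels), dtype=torch.bool)
    for c in range(num_classes):
        idx = (labels == c).nonzero(as_tuple=True)[0]
        feats = features[idx] - features[idx].mean(dim=0)
        _, _, vh = torch.linalg.svd(feats, full_matrices=False)
        scores = (feats @ vh[0]) ** 2          # correlation with top singular vector
        k = min(int(1.5 * eps), len(idx))
        flagged[idx[scores.topk(k).indices]] = True
    return flagged
```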
Results. As shown in Table 22-23, the filtering performance of DBD is on par with that of SS and AC. DBD is even better than those methods when filtering poisoned samples generated by more complicated attacks (i.e., WaNet and Label-Consistent). Besides, we also conduct the standard training on non-malicious samples filtered by SS and AC. As shown in Table 24, the hidden backdoor will still be created in many cases, even though the detection-based defenses are sometimes accurate.
This is mainly because these methods may not be able to remove enough poisoned samples while preserving enough benign samples simultaneously, i.e., there is a trade-off between the BA and ASR.

8https://github.com/MadryLab/backdoor_data_poisoning
9https://github.com/ain-soph/trojanzoo/blob/main/trojanvision/defenses/backdoor/activation_clustering.py
M DBD WITH DIFFERENT SELF-SUPERVISED METHODS
In this paper, we believe that the desired feature extractor maps visually similar inputs to similar positions in the feature space, such that poisoned samples will be separated into their source classes. This goal is compatible with that of self-supervised learning. We believe that any self-supervised learning method can be adopted in our method. To further verify this point, we replace the adopted SimCLR with other self-supervised methods in our DBD and examine their performance.
Settings. We replace the SimCLR with two other self-supervised methods, including MoCo-V2 (Chen et al., 2020b) and BYOL (Grill et al., 2020), in our DBD. Except for the adopted self-supervised method, other settings are the same as those used in Section 5.2.
Results. As shown in Table 25, all DBD variants have similar performances. In other words, our DBD is not sensitive to the selection of self-supervised methods.
N DBD WITH DIFFERENT LABEL-NOISE LEARNING METHODS
In the main manuscript, we adopt SCE as the label-noise learning method in our second stage. In this section, we explore whether our DBD is still effective if other label-noise methods are adopted.
Settings. We replace SCE in our DBD with two other label-noise learning methods, including generalized cross entropy (GCE) (Zhang & Sabuncu, 2018) and active passive loss (APL) (Ma et al., 2020). Specifically, we adopt the combination of NCE+RCE in APL and use the default hyper-parameters suggested in their original papers. Except for the adopted label-noise learning method, other settings are the same as those used in Section 5.2.
Results. As shown in Table 26, all DBD variants are effective in reducing backdoor threats (i.e., low ASR) while maintaining high benign accuracy. In other words, our DBD is not sensitive to the selection of label-noise learning methods.
O ANALYZING WHY OUR DBD IS EFFECTIVE IN DEFENDING AGAINST LABEL-CONSISTENT ATTACK
In general, the good defense performance of our DBD method against the label-consistent attack (which is one of the clean-label attacks) can be explained from the following aspects:
Firstly, as shown in Figure 1, there is a common observation across different attacks (including both poisoned- and clean-label attacks) that poisoned samples tend to gather together in the feature space learned by the standard supervised learning. The most intuitive idea of our DBD is to prevent such a gathering in the learned feature space, which is implemented by self-supervised learning. As shown in Figure 1(d), the poisoned samples of the label-consistent attack are also separated into different areas in the feature space learned by self-supervised learning. This example gives an intuitive explanation of why our DBD can successfully defend against the label-consistent attack.
Furthermore, it is interesting to explore why the poisoned samples in the label-consistent attack are separated under self-supervised learning, since all poisoned samples are from the same target class, rather than from different source classes as in poisoned-label attacks. For each poisoned sample in this attack, there are two types of features: the trigger and the benign feature with (untargeted) adversarial perturbations. From the perspective of DNNs, benign samples with (untargeted) adversarial perturbations are similar to samples from different source classes, though these samples look similar from a human's perspective. Thus, it is not surprising that poisoned samples in clean-label attacks can also be separated under self-supervised learning, just like those in poisoned-label attacks. | 1. What is the focus and contribution of the paper regarding backdoor attacks?
2. What are the strengths of the proposed method, particularly in its ability to scatter poisoned data points in the feature space?
3. Are there any recent works that the paper could compare itself to, especially those based on data removal?
4. What are the necessary conditions for a feature extractor to scatter poisoned data points in the feature space, and can other feature extractors achieve this property?
5. How does the final fine-tuning step impact the performance of DBD, and how would removing it affect the results?
6. Can the authors provide more information about their adaptive attack settings and how they tuned the hyperparameters?
7. How might an attack work around the proposed defense, and how could the defense be modified to counter such workarounds? | Summary Of The Paper
Review | Summary Of The Paper
This paper shows that self-supervised, contrastive learning can give a feature extractor that scatters training data points with backdoor triggers in the feature space. With this observation, the authors propose a novel defense method based on contrastive learning and decouple end-to-end training to defend against backdoor attacks. They first train a feature extractor using self-supervised contrastive learning that turns the poisoned data points into outliers in the feature space. Then they train a cascade classifier that ignores the poisoned data points by leveraging the fact that a neural network tends to capture frequent patterns. Experiments are conducted and the results verify the effectiveness of the defense.
Review
The paper proposes a novel defense method named DBD based on contrastive learning (SimCLR) and conducts extensive experiments to show that DBD is effective against different types of attacks, including BadNets, Blended, WaNet, and Label-Consistent attacks.
However, the authors did not compare DBD with some recent works such as "Spectral signatures in backdoor attacks" in NIPS 2018 and "Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering" in AAAI 2019, both of which are based on data removal. It is unclear how differently DBD removes the poisoned data as compared with the existing works.
While the authors claim that their major contribution was to decouple the end-to-end training process, I suspect the proposed approach works only with SimCLR. What are the necessary conditions that make a feature extractor scatter poisoned data points in the feature space? Can you point out other feature extractors having the same properties as SimCLR? Does DBD work with these extractors too?
The effect of the final fine-tuning step is unclear. How does DBD perform without this phase?
I also suggest the authors move the section about the resistance to adaptive attacks from the Appendix into the main paper, as adaptive attacks are becoming a more serious threat today. Please explain your adaptive attack settings more clearly (for example, what trigger size you used and how you tuned the hyper-parameters).
It would also be good if the authors discussed how an attack may work around the proposed defense, and how to further defend against such workarounds.
Edit after rebuttal:
The reviewer thanks the authors for their response. Most of my concerns have been addressed. In particular, the DBD seems to be effective against the adaptive attacks with a large trigger size. The experiments also show that DBD works with other self-supervised learning methods. Also, comparisons with baseline methods such as Spectral Signatures have been made. Due to the above, I raise my score.
ICLR | Title
Backdoor Defense via Decoupling the Training Process
Abstract
Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few training samples. The attacked model behaves normally on benign samples, whereas its prediction will be maliciously changed when the backdoor is activated. We reveal that poisoned samples tend to cluster together in the feature space of the attacked DNN model, which is mostly due to the end-to-end supervised training paradigm. Inspired by this observation, we propose a novel backdoor defense via decoupling the original end-to-end training process into three stages. Specifically, we first learn the backbone of a DNN model via self-supervised learning based on training samples without their labels. The learned backbone will map samples with the same ground-truth label to similar locations in the feature space. Then, we freeze the parameters of the learned backbone and train the remaining fully connected layers via standard training with all (labeled) training samples. Lastly, to further alleviate side-effects of poisoned samples in the second stage, we remove labels of some 'low-credible' samples determined based on the learned model and conduct a semi-supervised fine-tuning of the whole model. Extensive experiments on multiple benchmark datasets and DNN models verify that the proposed defense is effective in reducing backdoor threats while preserving high accuracy in predicting benign samples. Our code is available at https://github.com/SCLBD/DBD.
1 INTRODUCTION
Deep learning, especially deep neural networks (DNNs), has been widely adopted in many realms (Wang et al., 2020b; Li et al., 2020a; Wen et al., 2020) for its high effectiveness. In general, the training of DNNs requires a large amount of training samples and computational resources. Accordingly, third-party resources (e.g., third-party data or servers) are usually involved. While the opacity of the training process brings certain convenience, it also introduces new security threats.
Backdoor attack poses a new security threat to the training process of DNNs (Li et al., 2020c). It maliciously manipulates the prediction of the attacked DNNs by poisoning a few training samples. Specifically, backdoor attackers inject the backdoor trigger (i.e., a particular pattern) into some benign training images and change their labels to the attacker-specified target label. The connection between the backdoor trigger and the target label will be learned by DNNs during the training process. In the inference process, the prediction of attacked DNNs will be changed to the target label when the trigger is present, whereas the attacked DNNs will behave normally on benign samples. As such, it is difficult for users to realize the existence of hidden backdoors, and therefore this attack is a serious threat to the practical applications of DNNs.
In this paper, we first investigate backdoor attacks from the hidden feature space. Our preliminary experiments reveal that the backdoor is embedded in the feature space, i.e., samples with the backdoor trigger (dubbed poisoned samples) tend to cluster together in the feature space. We reveal that this phenomenon is mostly due to the end-to-end supervised training paradigm. Specifically, the excessive learning capability allows DNNs to learn features about the backdoor trigger, while the DNNs can shrink the distance between poisoned samples in the feature space and connect the learned trigger-related features with the target label by the end-to-end supervised training. Based on this understanding, we propose to decouple the end-to-end training process for the backdoor defense. Specifically, we treat the DNNs as two disjoint parts, including a feature extractor (i.e., backbone) and a simple classifier (i.e., the remaining fully connected layers). We first learn the purified feature extractor via self-supervised learning (Kolesnikov et al., 2019; Chen et al., 2020a; Jing & Tian, 2020) with unlabeled training samples (obtained by removing their labels), and then learn the simple classifier via the standard supervised training process based on the learned feature extractor and all training samples. The strong data augmentations involved in the self-supervised learning damage trigger patterns, making them unlearnable during representation learning; and the decoupling process further disconnects trigger patterns and the target label. Accordingly, hidden backdoors cannot be successfully created even if the model is trained on the poisoned dataset based on our defense.

∗The first two authors contributed equally to this work. This work was mostly done when Kunzhe Huang and Yiming Li were the research interns at The Chinese University of Hong Kong, Shenzhen. † indicates corresponding authors: Baoyuan Wu (wubaoyuan@cuhk.edu.cn) and Zhan Qin (qinzhan@zju.edu.cn).
Moreover, we further reveal that the representation of poisoned samples generated by the purified extractor is significantly different from those generated by the extractor learned with the standard training process. Specifically, the poisoned sample lies close to samples with its ground-truth label instead of the target label. This phenomenon makes the training of the simple classifier similar to label-noise learning (Wang et al., 2019b; Ma et al., 2020; Berthon et al., 2021). As such, we first filter high-credible training samples (i.e., training samples that are most probably benign) and then use those samples as labeled samples and the remaining part to form unlabeled samples to fine-tune the whole model via semi-supervised learning (Rasmus et al., 2015; Berthelot et al., 2019; Sohn et al., 2020). This approach further reduces the adverse effects of poisoned samples.
The main contributions of this paper are three-fold. (1) We reveal that the backdoor is embedded in the feature space, which is mostly due to the end-to-end supervised training paradigm. (2) Based on our understanding, we propose a decoupling-based backdoor defense (DBD) to alleviate the threat of poisoning-based backdoor attacks. (3) Experiments on classical benchmark datasets are conducted, which verify the effectiveness of our defense.
2 RELATED WORK
2.1 BACKDOOR ATTACK
Backdoor attack is an emerging research area, which raises security concerns about training with third-party resources. In this paper, we focus on the poisoning-based backdoor attack towards image classification, where attackers can only modify the dataset instead of other training components (e.g., training loss). This threat could also happen in other tasks (Xiang et al., 2021; Zhai et al., 2021; Li et al., 2022) and with different attacker’s capacities (Nguyen & Tran, 2020; Tang et al., 2020; Zeng et al., 2021a), which are out-of-scope of this paper. In general, existing attacks can be divided into two main categories based on the property of target labels, as follows:
Poison-Label Backdoor Attack. It is currently the most common attack paradigm, where the target label is different from the ground-truth label of poisoned samples. BadNets (Gu et al., 2019) is the first and most representative poison-label attack. Specifically, it randomly selected a few samples from the original benign dataset to generate poisoned samples, by stamping the backdoor trigger onto the (benign) images and changing their labels to an attacker-specified target label. The generated poisoned samples, combined with the remaining benign ones, form the poisoned training dataset, which will be delivered to users. After that, (Chen et al., 2017) suggested that the poisoned image should be similar to its benign version for stealthiness, based on which they proposed the blended attack. Recently, (Xue et al., 2020; Li et al., 2020b; 2021c) further explored how to conduct poison-label backdoor attacks more stealthily. Most recently, a more stealthy and effective attack, the WaNet (Nguyen & Tran, 2021), was proposed. WaNet adopts image warping as the backdoor trigger, which deforms but preserves the image content.
Clean-Label Backdoor Attack. Although the poisoned image generated by poison-label attacks could be similar to its benign version, users may still notice the attack by examining the image-label relationship. To address this problem, Turner et al. (2019) proposed the clean-label attack paradigm, where the target label is consistent with the ground-truth label of poisoned samples. Specifically,
they first leveraged adversarial perturbations or generative models to modify some benign images from the target class and then conducted the standard trigger injection process. This idea was generalized to attack video classification in (Zhao et al., 2020b), where they adopted the targeted universal adversarial perturbation (Moosavi-Dezfooli et al., 2017) as the trigger pattern. Although clean-label backdoor attacks are more stealthy compared with poison-label ones, they usually suffer from relatively poor performance and may even fail in creating backdoors (Li et al., 2020c).
2.2 BACKDOOR DEFENSE
Currently, there are also some approaches to alleviate the backdoor threat. Existing defenses are mostly empirical and can be divided into five main categories, including (1) detection-based defenses (Xu et al., 2021; Zeng et al., 2021a; Xiang et al., 2022), (2) preprocessing based defenses (Doan et al., 2020; Li et al., 2021b; Zeng et al., 2021b), (3) model reconstruction based defenses (Zhao et al., 2020a; Li et al., 2021a; Zeng et al., 2022), (4) trigger synthesis based defenses (Guo et al., 2020; Dong et al., 2021; Shen et al., 2021), and (5) poison suppression based defenses (Du et al., 2020; Borgnia et al., 2021). Specifically, detection-based defenses examine whether a suspicious DNN or sample is attacked and deny the use of malicious objects; preprocessing based methods intend to damage trigger patterns contained in attack samples to prevent backdoor activation by introducing a preprocessing module before feeding images into DNNs; model reconstruction based ones aim at removing the hidden backdoor in DNNs by modifying models directly; the fourth type of defenses synthesize potential trigger patterns at first, followed by a second stage in which the hidden backdoor is eliminated by suppressing their effects; the last type of methods depress the effectiveness of poisoned samples during the training process to prevent the creation of hidden backdoors. In general, our method is most relevant to this type of defenses.
In this paper, we only focus on the last four types of defenses since they directly improve the robustness of DNNs. Besides, there were also a few works focusing on certified backdoor defenses (Wang et al., 2020a; Weber et al., 2020). Their robustness is theoretically guaranteed under certain assumptions, which causes these methods to be generally weaker than empirical ones in practice.
2.3 SEMI-SUPERVISED AND SELF-SUPERVISED LEARNING
Semi-supervised Learning. In many real-world applications, the acquisition of labeled data often relies on manual labeling, which is very expensive. In contrast, obtaining unlabeled samples is much easier. To utilize the power of unlabeled samples together with labeled ones, a great number of semi-supervised learning methods were proposed (Gao et al., 2017; Berthelot et al., 2019; Van Engelen & Hoos, 2020). Recently, semi-supervised learning was also introduced in improving the security of DNNs (Stanforth et al., 2019; Carmon et al., 2019), where unlabelled samples were utilized in adversarial training. Most recently, (Yan et al., 2021) discussed how to backdoor semi-supervised learning. However, this approach needs to control other training components (e.g., training loss) in addition to modifying training samples and is therefore out of the scope of this paper. How to adopt semi-supervised learning for backdoor defense remains unexplored.
Self-supervised Learning. This learning paradigm is a subset of unsupervised learning, where DNNs are trained with supervised signals generated from the data itself (Chen et al., 2020a; Grill et al., 2020; Liu et al., 2021). It has been adopted for increasing adversarial robustness (Hendrycks et al., 2019; Wu et al., 2021; Shi et al., 2021). Most recently, there were also a few works (Saha et al., 2021; Carlini & Terzis, 2021; Jia et al., 2021) exploring how to backdoor self-supervised learning. However, these attacks are out-of-scope of this paper since they need to control other training components (e.g., training loss) in addition to modifying training samples.
3 REVISITING BACKDOOR ATTACKS FROM THE HIDDEN FEATURE SPACE
In this section, we analyze the behavior of poisoned samples from the hidden feature space of attacked models and discuss its inherent mechanism.
Settings. We conduct the BadNets (Gu et al., 2019) and label-consistent attack (Turner et al., 2019) on CIFAR-10 dataset (Krizhevsky, 2009) for the discussion. They are representative of poison-label attacks and clean-label attacks, respectively. Specifically, we conduct supervised learning on the poisoned datasets with the standard training process and self-supervised learning on the unlabelled
poisoned datasets with SimCLR (Chen et al., 2020a). We visualize poisoned samples in the hidden feature space generated by attacked DNNs based on the t-SNE (Van der Maaten & Hinton, 2008). More detailed settings are presented in Appendix A.
Results. As shown in Figure 1(a)-1(b), poisoned samples (denoted by ‘black-cross’) tend to cluster together to form a separate cluster after the standard supervised training process, no matter under the poison-label attack or clean-label attack. This phenomenon implies why existing poisoning-based backdoor attacks can succeed. Specifically, the excessive learning capability allows DNNs to learn features about the backdoor trigger. Associated with the end-to-end supervised training paradigm, DNNs can shrink the distance between poisoned samples in the feature space and connect the learned trigger-related features with the target label. In contrast, as shown in Figure 1(c)-1(d), poisoned samples lie closely to samples with their ground-truth label after the self-supervised training process on the unlabelled poisoned dataset. It indicates that we can prevent the creation of backdoors by self-supervised learning, which will be further introduced in the next section.
4 DECOUPLING-BASED BACKDOOR DEFENSE
4.1 PRELIMINARIES
General Pipeline of Backdoor Attacks. Let $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$ denote the benign training set, where $x_i \in \mathcal{X} = \{0, 1, \ldots, 255\}^{C \times W \times H}$ is the image, $y_i \in \mathcal{Y} = \{0, 1, \ldots, K\}$ is its label, $K$ is the number of classes, and $y_t \in \mathcal{Y}$ indicates the target label. How to generate the poisoned dataset $\mathcal{D}_p$ is the cornerstone of backdoor attacks. Specifically, $\mathcal{D}_p$ consists of two subsets, including the modified version of a subset of $\mathcal{D}$ and the remaining benign samples, i.e., $\mathcal{D}_p = \mathcal{D}_m \cup \mathcal{D}_b$, where $\mathcal{D}_b \subset \mathcal{D}$, $\gamma \triangleq \frac{|\mathcal{D}_m|}{|\mathcal{D}|}$ is the poisoning rate, $\mathcal{D}_m = \{(x', y_t) \mid x' = G(x), (x, y) \in \mathcal{D} \backslash \mathcal{D}_b\}$, and $G: \mathcal{X} \rightarrow \mathcal{X}$ is an attacker-predefined poisoned image generator. For example, $G(x) = (1-\lambda) \otimes x + \lambda \otimes t$ in the blended attack (Chen et al., 2017), where $\lambda \in [0, 1]^{C \times W \times H}$, $t \in \mathcal{X}$ is the trigger pattern, and $\otimes$ is the element-wise product. Once $\mathcal{D}_p$ is generated, it will be sent to users who will train DNNs on it. Hidden backdoors will be created after the training process.
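As an illustration, the following sketch builds $\mathcal{D}_p$ for the blended attack; for simplicity it uses the scalar blended ratio λ = 0.1 from our experiments in place of the general element-wise mask λ above.

```python
import torch

def build_blended_poisoned_set(images, labels, trigger, target_label,
                               poison_rate=0.05, lam=0.1):
    """Sketch of constructing D_p for the blended attack: a random
    gamma-fraction of samples is modified by G(x) = (1 - lam) * x + lam * t
    and relabeled as y_t; the rest stay benign. `images` is an (N, C, W, H)
    tensor in [0, 1] and `trigger` is a (C, W, H) trigger pattern t.
    """
    n = images.size(0)
    poison_idx = torch.randperm(n)[: int(poison_rate * n)]
    poisoned_images = images.clone()
    poisoned_labels = labels.clone()
    poisoned_images[poison_idx] = (1 - lam) * images[poison_idx] + lam * trigger
    poisoned_labels[poison_idx] = target_label
    return poisoned_images, poisoned_labels, poison_idx
```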
Threat Model. In this paper, we focus on defending against poisoning-based backdoor attacks. The attacker can arbitrarily modify the training set but cannot change other training components (e.g., model structure and training loss). For our proposed defense, we assume that defenders can fully control the training process. This is the scenario in which users adopt third-party collected samples for training. Note that we do not assume that defenders have a local benign dataset, which is often required in many existing defenses (Wang et al., 2019a; Zhao et al., 2020a; Li et al., 2021a).
Defender’s Goals. The defender’s goals are to prevent the trained DNN model from predicting poisoned samples as the target label and to preserve the high accuracy on benign samples.
4.2 OVERVIEW OF THE DEFENSE PIPELINE
In this section, we describe the general pipeline of our defense. As shown in Figure 2, it consists of three main stages, including (1) learning a purified feature extractor via self-supervised learning, (2) filtering high-credible samples via label-noise learning, and (3) semi-supervised fine-tuning.
Specifically, in the first stage, we remove the labels of all training samples to form the unlabelled dataset, on which we train the feature extractor via self-supervised learning. In the second stage, we freeze the learned feature extractor and adopt all training samples to train the remaining fully connected layers via supervised learning. We then filter α% high-credible samples based on the training loss. The smaller the loss, the more credible the sample. After the second stage, the training set will be separated into two disjoint parts, including high-credible samples and low-credible samples. We use high-credible samples as labeled samples and remove the labels of all low-credible samples to fine-tune the whole model via semi-supervised learning. More detailed information about each stage of our method will be further illustrated in the following sections.
4.3 LEARNING PURIFIED FEATURE EXTRACTOR VIA SELF-SUPERVISED LEARNING
Let $\mathcal{D}_t$ denote the training set and $f_w: \mathcal{X} \rightarrow [0,1]^K$ indicate the DNN with parameters $w = [w_c, w_f]$, where $w_c$ and $w_f$ indicate the parameters of the backbone and the fully connected layer, respectively. In this stage, we optimize $w_c$ based on the unlabeled version of $\mathcal{D}_t$ via self-supervised learning, as follows:

$$w_c^* = \arg\min_{w_c} \sum_{(x,y)\in\mathcal{D}_t} \mathcal{L}_1(x; w_c), \tag{1}$$

where $\mathcal{L}_1(\cdot)$ indicates the self-supervised loss (e.g., NT-Xent in SimCLR (Chen et al., 2020a)). Through the self-supervised learning, the learned feature extractor (i.e., backbone) will be purified even if the training set contains poisoned samples, as illustrated in Section 3.
4.4 FILTERING HIGH-CREDIBLE SAMPLES VIA LABEL-NOISE LEARNING
Once $w_c^*$ is obtained, the user can freeze it and adopt $\mathcal{D}_t$ to further optimize the remaining $w_f$, i.e.,

$$w_f^* = \arg\min_{w_f} \sum_{(x,y)\in\mathcal{D}_t} \mathcal{L}_2\big(f_{[w_c^*, w_f]}(x), y\big), \tag{2}$$
where $\mathcal{L}_2(\cdot)$ indicates the supervised loss (e.g., cross entropy). After the decoupling-based training process (1)-(2), even if the model is (partly) trained on the poisoned dataset, the hidden backdoor cannot be created since the feature extractor is purified. However, this simple strategy suffers from two main problems. Firstly, compared with the one trained via supervised learning, the accuracy of predicting benign samples will have a certain decrease, since the learned feature extractor is frozen in the second stage. Secondly, poisoned samples will serve as 'outliers' to further hinder the learning of the second stage when poison-label attacks appear, since those samples lie close to samples with their ground-truth labels instead of the target label in the hidden feature space generated by the learned purified feature extractor. These two problems indicate that we should remove poisoned samples and retrain or fine-tune the whole model.
Specifically, we select high-credible samples $\mathcal{D}_h$ based on the loss $\mathcal{L}_2(\cdot; [w_c^*, w_f^*])$. The high-credible samples are defined as the α% training samples with the smallest loss, where α ∈ [0, 100] is a hyper-parameter. In particular, we adopt the symmetric cross-entropy (SCE) (Wang et al., 2019b) as $\mathcal{L}_2(\cdot)$, inspired by label-noise learning. As shown in Figure 3, compared with the CE loss, the SCE can significantly increase the differences between poisoned samples and benign ones, which further reduces the possibility that the high-credible dataset $\mathcal{D}_h$ still contains poisoned samples. Note that we do not intend to accurately separate poisoned samples and benign samples. We only want to ensure that the obtained $\mathcal{D}_h$ contains as few poisoned samples as possible.
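A minimal sketch of this filtering step is given below. The per-sample SCE computation and its α/β weights follow the settings described in Appendix B.3, and the model is assumed to have its backbone already frozen and its fully connected layers trained; the loader must iterate the training set without shuffling so that the returned indices match the dataset order.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def filter_high_credible(model, loader, alpha=0.5, num_classes=10, device="cuda"):
    """Rank training samples by their symmetric cross-entropy loss under the
    frozen-backbone model and keep the alpha fraction with the smallest loss
    as the high-credible set D_h. Returns indices into the (unshuffled) set.
    """
    model.eval()
    losses = []
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        logits = model(x)
        ce = F.cross_entropy(logits, y, reduction="none")
        pred = F.softmax(logits, dim=1).clamp(min=1e-4)
        one_hot = F.one_hot(y, num_classes).float().clamp(min=1e-4)
        rce = -(pred * one_hot.log()).sum(dim=1)
        losses.append((0.1 * ce + 1.0 * rce).cpu())   # SCE with alpha=0.1, beta=1
    losses = torch.cat(losses)
    keep = int(alpha * len(losses))
    return losses.argsort()[:keep]    # indices of the high-credible samples
```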
4.5 SEMI-SUPERVISED FINE-TUNING
After the second stage, the third-party training set $\mathcal{D}_t$ will be separated into two disjoint parts, including the high-credible dataset $\mathcal{D}_h$ and the low-credible dataset $\mathcal{D}_l \triangleq \mathcal{D}_t \backslash \mathcal{D}_h$. Let $\hat{\mathcal{D}}_l \triangleq \{x \mid (x, y) \in \mathcal{D}_l\}$ indicate the unlabeled version of the low-credible dataset $\mathcal{D}_l$. We fine-tune the whole trained model $f_{[w_c^*, w_f^*]}(\cdot)$ with semi-supervised learning as follows:

$$\min_w \mathcal{L}_3(\mathcal{D}_h, \hat{\mathcal{D}}_l; w), \tag{3}$$

where $\mathcal{L}_3(\cdot)$ denotes the semi-supervised loss (e.g., the loss in MixMatch (Berthelot et al., 2019)). This process can prevent the side-effects of poisoned samples while utilizing their contained useful information, and encourage the compatibility between the feature extractor and the simple classifier via learning them jointly instead of separately. Please refer to Section 5.3 for more results.
5 EXPERIMENTS
5.1 EXPERIMENTAL SETTINGS
Datasets and DNNs. We evaluate all defenses on two classical benchmark datasets, including CIFAR-10 (Krizhevsky, 2009) and (a subset of) ImageNet (Deng et al., 2009). We adopt the ResNet-18 (He et al., 2016) for these tasks. More detailed settings are presented in Appendix B.1. Besides, we also provide the results on (a subset of) VGGFace2 (Cao et al., 2018) in Appendix C.
Attack Baselines. We examine all defense approaches in defending against four representative attacks. Specifically, we select the BadNets (Gu et al., 2019), the backdoor attack with blended strategy (dubbed 'Blended') (Chen et al., 2017), WaNet (Nguyen & Tran, 2021), and the label-consistent attack with adversarial perturbations (dubbed 'Label-Consistent') (Turner et al., 2019) for the evaluation. They are representative of patch-based visible and invisible poison-label attacks, non-patch-based poison-label attacks, and clean-label attacks, respectively.
Defense Baselines. We compare our DBD with two defenses having the same defender's capacities, including the DPSGD (Du et al., 2020) and ShrinkPad (Li et al., 2021b). We also compare with two other approaches with an additional requirement (i.e., having a local benign dataset), including
the neural cleanse with unlearning strategy (dubbed ‘NC’) (Wang et al., 2019a), and neural attention distillation (dubbed ‘NAD’) (Li et al., 2021a). They are the representative of poison suppression based defenses, preprocessing based defenses, trigger synthesis based defenses, and model reconstruction based defenses, respectively. We also provide results of DNNs trained without any defense (dubbed ‘No Defense’) as another important baseline for reference.
Attack Setups. We use a 2 × 2 square as the trigger pattern on CIFAR-10 dataset and the 32 × 32 Apple logo on ImageNet dataset for the BadNets, as suggested in (Gu et al., 2019; Wang et al., 2019a). For Blended, we adopt the ‘Hello Kitty’ pattern on CIFAR-10 and the random noise pattern on ImageNet, based on the suggestions in (Chen et al., 2017), and set the blended ratio λ = 0.1 on all datasets. The trigger pattern adopted in label-consistent attack is the same as the one used in BadNets. For WaNet, we adopt its default settings on CIFAR-10 dataset. However, on ImageNet dataset, we use different settings optimized by grid-search since the original ones fail. An example of poisoned samples generated by different attacks is shown in Figure 4. Besides, we set the poisoning rate γ1 = 2.5% for label-consistent attack (25% of training samples with the target label) and γ2 = 5% for three other attacks. More details are shown in Appendix B.2.
Defense Setups. For our DBD, we adopt SimCLR (Chen et al., 2020a) as the self-supervised method and MixMatch (Berthelot et al., 2019) as the semi-supervised method. More details about SimCLR and MixMatch are in Appendix I. The filtering rate α is the only key hyper-parameter in DBD, which is set to 50% in all cases. We set the shrinking rate to 10% for the ShrinkPad on all datasets, as suggested in (Li et al., 2021b; Zeng et al., 2021b). In particular, DPSGD and NAD are sensitive to their hyper-parameters. We report their best results in each case based on the grid-search (as shown in Appendix D). Besides, we split a 5% random subset of the benign training set as the local benign dataset for NC and NAD. More implementation details are provided in Appendix B.3.
Evaluation Metrics. We adopt the attack success rate (ASR) and benign accuracy (BA) to measure the effectiveness of all methods1. Specifically, let Dtest denote the (benign) testing set and Cw : X → Y denote the trained classifier; we have ASR ≜ Pr(x,y)∈Dtest{Cw(G(x)) = yt | y ≠ yt} and BA ≜ Pr(x,y)∈Dtest{Cw(x) = y}, where yt is the target label and G(·) is the poisoned image generator. In particular, the lower the ASR and the higher the BA, the better the defense.
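Both metrics can be computed as in the following sketch, where `poison_fn` stands for the generator G(·) and is an assumed callable; this is for illustration only.

```python
import torch

@torch.no_grad()
def evaluate_ba_asr(model, test_loader, poison_fn, target_label):
    """Return (BA, ASR) over a benign test set.

    BA is the accuracy on clean inputs; ASR is measured only on samples whose
    ground-truth label differs from the target label y_t, as defined above.
    """
    model.eval()
    clean_correct, total, fooled, non_target = 0, 0, 0, 0
    for x, y in test_loader:
        clean_correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
        mask = y != target_label          # ASR is defined on y != y_t only
        if mask.any():
            preds = model(poison_fn(x[mask])).argmax(dim=1)
            fooled += (preds == target_label).sum().item()
            non_target += int(mask.sum())
    return clean_correct / total, fooled / max(non_target, 1)
```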
5.2 MAIN RESULTS
Comparing DBD with Defenses having the Same Requirements. As shown in Table 1-2, DBD is significantly better than defenses having the same requirements (i.e., DPSGD and ShrinkPad) in defending against all attacks. For example, the benign accuracy of DBD is over 20% higher and the attack success rate is more than 5% lower than those of DPSGD in all cases. Specifically, the attack success rate of models with DBD is less than 2% in all cases (mostly < 0.5%), which verifies that our method can successfully prevent the creation of hidden backdoors. Moreover, the decreases of benign accuracy are less than 2% when defending against poison-label attacks, compared with models trained without any defense. Our method is even better on the relatively larger dataset, where all baseline methods become less effective. These results verify the effectiveness of our method.
1Among all defense methods, the one with the best performance is indicated in boldface and the value with underline denotes the second-best result.
Comparing DBD with Defenses having Extra Requirements. We also compare our defense with two other methods (i.e., NC and NAD), which have an additional requirement that defenders have a benign local dataset. As shown in Table 1-2, NC and NAD are better than DPSGD and ShrinkPad, as expected, since they adopt additional information from the benign local dataset. In particular, although NAD and NC use additional information, our method is still better than them, even when their performances are tuned to the best while our method only uses the default settings. Specifically, the BA of NC is on par with that of our method; however, this comes at the sacrifice of ASR. Especially on the ImageNet dataset, NC has limited effects in reducing the ASR. In contrast, our method reaches the smallest ASR while its BA is either the highest or the second-highest in almost all cases. These results verify the effectiveness of our method again.
Defending against Attacks with Various Poisoning Rates. As shown in Figure 7 (with settings detailed in Appendix H.2), our method can still prevent the creation of hidden backdoors even when the poisoning rate reaches 20%. Besides, DBD also maintains high benign accuracy. In other words, our method is effective in defending against attacks with different strengths.
5.3 ABLATION STUDY
There are four key strategies in DBD, including (1) obtaining a purified feature extractor, (2) using SCE instead of CE in the second stage, (3) reducing side-effects of low-credible samples, and (4) fine-tuning the whole model via semi-supervised learning. Here we verify their effectiveness.
Settings. We compare the proposed DBD with its four variants, including (1) DBD without SS, (2) SS with CE, (3) SS with SCE, and (4) SS with SCE + Tuning, on the CIFAR-10 dataset. Specifically, in the first variant, we replace the backbone generated by self-supervised learning with the one trained in a supervised fashion and keep other parts unchanged. In the second variant, we freeze the backbone learned via self-supervised learning and train the remaining fully-connected layers with cross-entropy loss on all training samples. The third variant is similar to the second one. The only difference is that it uses symmetric cross-entropy instead of cross-entropy to train fully-connected layers. The last variant is an advanced version of the third one, which further fine-tunes fully-connected layers on high-credible samples filtered by the third variant.
Results. As shown in Table 3, we can conclude that decoupling the original end-to-end supervised training process is effective in preventing the creation of hidden backdoors, by comparing our DBD with its first variant and the model trained without any defense. Besides, we can also verify the effectiveness of the SCE loss in defending against poison-label backdoor attacks by comparing the second and third DBD variants. Moreover, the fourth DBD variant has relatively lower ASR and BA compared with the third one. This phenomenon is due to the removal of low-credible samples. It indicates that reducing the side-effects of low-credible samples while adopting their useful information is important for the defense. We can further verify that fine-tuning the whole model via semi-supervised learning is useful by comparing the fourth variant and the proposed DBD.
5.4 RESISTANCE TO POTENTIAL ADAPTIVE ATTACKS
In our paper, we adopted the classical defense setting that attackers have no information about the defense. Attackers may design adaptive attacks if they know the existence of our DBD. The most straightforward idea is to manipulate the self-supervised training process so that poisoned samples still form a new cluster after the self-supervised learning. However, attackers are not allowed to do this under our threat model, where they can only modify the third-party dataset. Despite this, attackers may design adaptive attacks by optimizing the trigger pattern so that poisoned samples still form a new cluster after the self-supervised learning, provided they know the model structure used by defenders, as follows:
Problem Formulation. For a K-classification problem, let X′ = {x_i}_{i=1}^M denote the benign images selected for poisoning, X_j = {x_i}_{i=1}^{N_j} denote the benign images with ground-truth label j, and g denote a trained backbone. Given an attacker-predefined poisoned image generator G, the adaptive attack aims to optimize a trigger pattern t by minimizing the distance between poisoned images while maximizing the distance between the center of poisoned images and the centers of clusters of benign images with different labels, i.e.,
min_t (1/M) ∑_{x∈X′} d(g(G(x; t)), g′) − (1/K) ∑_{i=1}^{K} d(g′, g_i),    (4)
where g′ ≜ (1/M) ∑_{x∈X′} g(G(x; t)), g_i ≜ (1/N_i) ∑_{x∈X_i} g(x), and d is a distance metric.
Settings. We adopt the CIFAR-10 dataset and use the ℓ2 norm as the distance metric to conduct the experiment. Specifically, we assume that attackers have the entire benign dataset, based on which they can train a backbone adopted in the first stage of our DBD. We use the Adam optimizer to solve the above optimization problem for 100 epochs with a learning rate of 0.1. The trigger size is set to 32×32, which means the attacker can completely modify the content of poisoned samples, regardless of their original semantic information and the stealthiness of the attack. This setting is to maximize the attack's ability, since clustering poisoned samples together is very difficult in self-supervised learning.
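Under these settings, solving Eq. (4) can be sketched as follows (a minimal illustration assuming a differentiable generator `poison_fn` and a frozen backbone; the exact attack code may differ).

```python
import torch

def optimize_adaptive_trigger(backbone, poison_fn, poison_imgs, class_centers,
                              steps=100, lr=0.1):
    """Optimize a trigger t for Eq. (4): pull poisoned features toward their
    center g' while pushing g' away from the benign class centers g_i (l2).

    backbone: the frozen feature extractor g (a callable);
    poison_fn(x, t): the generator G(x; t), assumed differentiable w.r.t. t;
    poison_imgs: X' of shape (M, C, H, W);
    class_centers: (K, feat_dim) tensor of per-class benign feature means.
    """
    trigger = torch.rand_like(poison_imgs[0], requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        feats = backbone(poison_fn(poison_imgs, trigger))    # g(G(x; t)), x in X'
        center = feats.mean(dim=0)                           # g'
        intra = (feats - center).norm(dim=1).mean()          # mean d(g(G(x;t)), g')
        inter = (class_centers - center).norm(dim=1).mean()  # mean d(g', g_i)
        loss = intra - inter
        opt.zero_grad()
        loss.backward()
        opt.step()
        trigger.data.clamp_(0.0, 1.0)                        # keep a valid image
    return trigger.detach()
```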
Results. The adaptive attack works well when there is no defense (BA=94.96%, ASR=99.70%). However, this attack still fails to attack our DBD (BA=93.21%, ASR=1.02%). In other words, our defense is resistant to this adaptive attack. This is most likely because the trigger optimized based on the backbone is far less effective when the model is retrained, since the model parameters change due to the random initialization and the update of model weights during the training process.
6 CONCLUSION
The mechanism of poisoning-based backdoor attacks is to establish a latent connection between trigger patterns and the target label during the training process. In this paper, we revealed that this connection is learned mostly due to the end-to-end supervised training paradigm. Motivated by this understanding, we proposed a decoupling-based backdoor defense, which first learns the backbone via self-supervised learning and then the remaining fully-connected layers by the classical supervised learning. We also introduced the label-noise learning method to determine high-credible and low-credible samples, based on which we fine-tuned the whole model via semi-supervised learning. Extensive experiments verify that our defense is effective in reducing backdoor threats while preserving high accuracy in predicting benign samples.
ACKNOWLEDGMENTS
Baoyuan Wu is supported in part by the National Natural Science Foundation of China under Grant 62076213, the University Development Fund of the Chinese University of Hong Kong, Shenzhen under Grant 01001810, and the Special Project Fund of Shenzhen Research Institute of Big Data under Grant T00120210003. Zhan Qin is supported in part by the National Natural Science Foundation of China under Grant U20A20178, the National Key Research and Development Program of China under Grant 2020AAA0107705, and the Research Laboratory for Data Security and Privacy, Zhejiang University-Ant Financial Fintech Center. Kui Ren is supported by the National Key Research and Development Program of China under Grant 2020AAA0107705.
ETHICS STATEMENT
DNNs are widely adopted in many mission-critical areas (e.g., face recognition) and therefore their security is of great significance. The vulnerability of DNNs to backdoor attacks raises serious concerns about using third-party training resources. In this paper, we propose a general training pipeline to obtain backdoor-free DNNs, even if the training dataset contains poisoned samples. This work has no ethical issues in general since our method is purely defensive and does not reveal any new vulnerabilities of DNNs. However, we need to mention that our defense can be adopted only when training with untrusted samples, and backdoor attacks could happen in other scenarios. People should not be too optimistic about eliminating backdoor threats.
REPRODUCIBILITY STATEMENT
The detailed descriptions of datasets, models, and training settings are in Appendix A-D. We also describe the computational facilities and cost in Appendix J-K. Codes of our DBD are also open-sourced.
A DETAILED SETTINGS FOR REVISITING BACKDOOR ATTACKS
Attack Setups. We conduct the BadNets (Gu et al., 2019) and label-consistent attack (Turner et al., 2019) with the target label yt = 3 on the CIFAR-10 dataset (Krizhevsky, 2009). The trigger patterns are the same as those presented in Section 5.2. In particular, we implement the label-consistent attack with adversarial perturbations, as suggested in its original paper (Turner et al., 2019). Specifically, we used the projected gradient descent (PGD) (Madry et al., 2018) to generate adversarial perturbations within the ℓ∞-ball, where the maximum perturbation size ε = 16.
Training Setups. We conduct supervised learning on the poisoned datasets with the standard training process, and self-supervised learning on the unlabelled poisoned datasets with SimCLR (Chen et al., 2020a). The supervised training is conducted based on the open-source code2. Specifically, we use the SGD optimizer with momentum 0.9, weight decay of 5 × 10−4, and an initial learning rate of 0.1. The batch size is set to 128 and we train the ResNet-18 model for 200 epochs. The learning rate is decreased by a factor of 10 at epochs 100 and 150, respectively. Besides, we add triggers before performing the data augmentation (e.g., random crop and horizontal flipping). For the self-supervised training, we use the stochastic gradient descent (SGD) optimizer with a momentum of 0.9, an initial learning rate of 0.4, and a weight decay factor of 5 × 10−4. We use a batch size of 512, and train the backbone for 1,000 epochs. We decay the learning rate with the cosine decay schedule (Loshchilov & Hutter, 2016) without a restart. Besides, we also adopt strong data augmentation techniques, including random crop and resize (with random flip), color distortions, and Gaussian blur, as suggested in (Chen et al., 2020a). All models are trained until convergence.
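For reference, the supervised schedule above corresponds to the following PyTorch configuration (a sketch that omits the poisoned data pipeline; torchvision's ResNet-18 stands in for the CIFAR-style variant used in the referenced open-source code).

```python
import torch
import torchvision

model = torchvision.models.resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
# Decay the learning rate by a factor of 10 at epochs 100 and 150 (200 total);
# the self-supervised run instead uses cosine decay without a restart.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[100, 150], gamma=0.1)
```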
t-SNE Visualization Settings. We treat the output of the last residual unit as the feature representation and use the tsne-cuda library (Chan et al., 2019) to get the feature embedding of all samples. For a better visualization, we adopt all poisoned samples and randomly select 10% of benign samples when visualizing models under supervised learning, and adopt 30% of poisoned samples and 10% of benign samples for those under self-supervised learning.
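The embedding step can be reproduced with any standard t-SNE implementation; the sketch below uses sklearn as a slower, widely available stand-in for tsne-cuda.

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_2d(features):
    """Project (N, D) feature representations (outputs of the last residual
    unit) to 2-D points for visualization."""
    features = np.asarray(features, dtype=np.float32)
    return TSNE(n_components=2, init="pca", perplexity=30.0).fit_transform(features)
```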
B DETAILED SETTINGS FOR MAIN EXPERIMENTS
B.1 MORE DETAILS ABOUT DATASETS AND DNNS
Due to the limitations of computational resources and time, we adopt a subset randomly selected from the original ImageNet. More detailed information about the datasets and DNNs adopted in the main experiments of our paper is presented in Table 4.
B.2 MORE DETAILS ABOUT ATTACK SETTINGS
Attack Setups. We conduct the BadNets (Gu et al., 2019), blended attack (dubbed 'Blended') (Chen et al., 2017), label-consistent attack (dubbed 'Label-Consistent') (Turner et al., 2019), and WaNet (Nguyen & Tran, 2021) with the target label yt = 3 on all datasets. The trigger patterns are the same as those presented in Section 5.2. In particular, we set the blended ratio λ = 0.1 for the blended attack on all datasets and examine the label-consistent attack with the maximum perturbation size ε ∈ {16, 32}. Besides, WaNet assumed that attackers can fully control the whole training process in its original paper. However, we found that WaNet only modified training data while other training components (e.g., training loss, training schedule, and model structure) are the same as those used in the standard training process. As such, we re-implement its code in the poisoning-based attack scenario based on its official code3. Specifically, following the settings in its original paper, we set the noise rate ρn = 0.2, control grid size k = 4, and warping strength s = 0.5 on
2https://github.com/kuangliu/pytorch-cifar
3https://github.com/VinAIResearch/Warping-based_Backdoor_Attack-release
the CIFAR-10 dataset. However, we found that the default k and s are too small to make the attack work on the ImageNet dataset (as shown in Table 5-6). Besides, the 'noise mode' also significantly reduces the attack effectiveness (as shown in Table 7). As such, we set k = 224 and s = 1 and train models without the noise mode on the ImageNet dataset.
Training Setups. On the CIFAR-10 dataset (Krizhevsky, 2009), the settings are the same as those described in Section A; On the ImageNet dataset (Deng et al., 2009), we conduct experiments based on the open-source code4. Specifically, we use the SGD optimizer with momentum 0.9, weight decay of 10−4, and an initial learning rate of 0.1. The batch size is set to 256 and we train the ResNet-18 model for 90 epochs. The learning rate is decreased by a factor of 10 at epochs 30 and 60, respectively. Besides, since the raw images in the ImageNet dataset are of different sizes, we resize them to 3× 224× 224 before adding triggers.
B.3 MORE DETAILS ABOUT DEFENSE SETTINGS
Settings for NC. We conduct reverse engineering and anomaly detection based on its open-source code5. We implement the ‘unlearning’ method to patch attacked models, as suggested in its paper (Wang et al., 2019a). We randomly select 5% benign training samples as the local benign dataset, which is used in the ‘unlearning’ process. Unless otherwise specified, other settings are the same as those used in (Wang et al., 2019a).
Settings for NAD. We implement this method based on its open-source code6. The original NAD only conducted experiments on the WideResNet model. In our paper, we calculate the NAD loss over the last residual group for the ResNet-18. The local benign dataset is the same as the one adopted in NC, which is used in the fine-tuning and distillation process of NAD. Unless otherwise specified, other settings are the same as those used in (Li et al., 2021a).
Settings for DPSGD. The original DPSGD was conducted on the MNIST dataset implemented in the TensorFlow framework. In this paper, we re-implement it based on the differentially private SGD method provided by Opacus7. Specifically, we replace the original SGD optimizer with the differentially private one, as suggested in (Du et al., 2020). There are two important hyper-parameters in DPSGD, including the noise scale σ and the clipping bound C. In the experiments, we set C = 1 and select the best σ by grid-search.
4https://github.com/pytorch/examples/tree/master/imagenet
5https://github.com/bolunwang/backdoor
6https://github.com/bboylyg/NAD
7https://github.com/pytorch/opacus
Settings for ShrinkPad. We set the shrinking rate to 10% on all datasets, as suggested in (Li et al., 2021b; Zeng et al., 2021b). Following their settings, we pad 0-pixels at the bottom right of the shrunk image to expand it to its original size.
Settings for our Defense. In the first stage, we adopt SimCLR (Chen et al., 2020a) to perform self-supervised learning. We train backbones for 100 instead of 1,000 epochs to reduce computational costs while preserving effectiveness. Other settings are the same as those described in Section A. We use the same settings across all datasets, models, and attacks. In the second stage, we use the Adam optimizer with a learning rate of 0.002 and set the batch size to 128. We train the fully connected layers for 10 epochs with the SCE loss (Wang et al., 2019b). Two hyper-parameters involved in the SCE (i.e., α and β in the original paper) are set to 0.1 and 1, respectively. After that, we filter 50% high-credible samples. We use the same settings across all datasets, models, and attacks. In the third stage, we adopt MixMatch (Berthelot et al., 2019) for semi-supervised fine-tuning with settings suggested in its original paper. Specifically, we use the Adam optimizer with a learning rate of 0.002 and a batch size of 64, and fine-tune the model for 190 epochs on CIFAR-10 and 80 epochs on the ImageNet dataset, respectively. We set the temperature T = 0.5 and the weight of the unsupervised loss λu = 15 on CIFAR-10 and λu = 6 on the ImageNet dataset, respectively. Moreover, we re-filter high-credible samples after every epoch of the third stage based on the SCE loss.
C DEFENDING AGAINST ATTACKS ON VGGFACE2 DATASET
Dataset and DNN. Due to the limitations of computational resources and time, we adopt a subset randomly selected from the original VGGFace2 (Cao et al., 2018). More details are in Table 8.
Settings for Attacks. For the training of models on the VGGFace2 dataset, the batch size is set to 32 and we conduct experiments on the DenseNet-121 model (Huang et al., 2017). Examples of poisoned samples generated by different attacks are shown in Figure 5. Other settings are the same as those used on the ImageNet dataset.
Settings for Defenses. For NAD, we calculate the NAD loss over the second to last layer for the DenseNet-121. Other settings are the same as those used on the ImageNet dataset.
Results. As shown in Table 9, our defense still reaches the best performance, even compared with NC and NAD. Specifically, the BA of NC is on par with that of our method, whereas this comes at the sacrifice of ASR. These results verify the effectiveness of our defense again.
D SEARCHING BEST RESULTS FOR DPSGD AND NAD
The effectiveness of DPSGD and NAD is sensitive to their hyper-parameters. Here we search for their best results based on the criterion that 'BA − ASR' reaches the highest value after the defense.
D.1 SEARCHING BEST RESULTS FOR DPSGD
In general, the larger the σ, the smaller the ASR while also the smaller the BA. The results of DPSGD are shown in Table 10-12, where the best results are marked in boldface.
D.2 SEARCHING BEST RESULTS FOR NAD
We found that the fine-tuning stage of NAD is sensitive to the learning rate. We search for the best initial learning rate from {0.1, 0.01, 0.001}. As shown in Table 13-15, a very large learning rate significantly reduces the BA, while a very small learning rate cannot reduce the ASR effectively. To keep a relatively large BA while maintaining a small ASR, we set η = 0.01 in the fine-tuning stage.
The distillation stage of NAD is also sensitive to its hyper-parameter β. We select the best β via the grid-search. The results are shown in Table 16-19.
E DEFENDING AGAINST LABEL-CONSISTENT ATTACK WITH A SMALLER POISONING RATE
For the label-consistent attack, in addition to the 2.5% poisoning rate examined in the main manuscript, 0.6% is also an important setting provided in its original paper (Turner et al., 2019). In this section, we compare different defenses against the label-consistent attack with poisoning rate γ = 0.6%.
As shown in Table 20, when defending against the label-consistent attack with a 0.6% poisoning rate, our method is still significantly better than defenses having the same requirements (i.e., DPSGD and ShrinkPad). Even compared with those having the additional requirement (i.e., NC and NAD) under their best settings, our defense is still better than or on par with them under the default settings. These results verify the effectiveness of our method again.
F DEFENDING AGAINST ATTACKS WITH DIFFERENT TRIGGER PATTERNS
In this section, we verify whether DBD is still effective when different trigger patterns are adopted.
Settings. For simplicity, we adopt the BadNets on the CIFAR-10 dataset as an example for the discussion. Specifically, we change the location and size of the backdoor trigger while keeping other settings unchanged to evaluate the BA and ASR before and after our defense.
Results. As shown in Table 21, although there are some fluctuations, the ASR is smaller than 2% while the BA is greater than 92% in every case. In other words, our method is effective in defending against attacks with different trigger patterns.
G DEFENDING AGAINST ATTACKS WITH DYNAMIC TRIGGERS
In this section, we verify whether DBD is still effective when attackers adopt dynamic triggers.
Settings. We compare DBD with MESA (Qiao et al., 2019) in defending against the dynamic attack discussed in (Qiao et al., 2019) on the CIFAR-10 dataset as an example for the discussion. This dynamic attack uses a distribution of triggers instead of a fixed trigger.
Results. The BA and ASR of DBD are 92.4% and 0.4%, while those of MESA are 94.8% and 2.4%. However, we find that MESA fails in defending against the blended attack (since it cannot correctly detect the trigger), whereas DBD is still effective. These results verify the effectiveness of our defense.
H DISCUSSIONS
H.1 EFFECTS OF HYPER-PARAMETERS
Settings. Here we analyze the effect of the filtering rate α, which is the only key method-related hyper-parameter in our DBD. We adopt the results on the CIFAR-10 dataset for discussion. Except for the studied parameter α, other settings are the same as those used in Section 5.2.
Figure 6: The effects of the filtering rate (two panels plotting BA (%) and ASR (%) against the filtering rate, 30-60%, for BadNets, Blended, WaNet, and Label-Consistent).
Figure 7: The effects of the poisoning rate (two panels plotting BA (%) and ASR (%) against the poisoning rate, 0-20%, for BadNets, Blended, and WaNet).
Results. The number of labeled samples used in the third stage increases with the filtering rate α, while the probability that the filtered high-credible dataset contains poisoned samples also increases. As shown in Figure 6, DBD can still maintain relatively high benign accuracy even when the filtering rate α is relatively small (e.g., 30%). This is mostly due to the high quality of the learned purified feature extractor and the semi-supervised fine-tuning process. DBD can also reach a nearly 0% attack success rate in all cases. However, we also have to notice that the high-credible dataset may contain poisoned samples when α is very large, which in turn creates hidden backdoors again during the fine-tuning process. Defenders should specify α based on their specific needs.
H.2 DEFENDING ATTACKS WITH VARIOUS POISONING RATES
Settings. We evaluate our method in defending against attacks with different poisoning rates γ on the CIFAR-10 dataset. Except for γ, other settings are the same as those used in Section 5.2.
I MORE DETAILS ABOUT SIMCLR, SCE, AND MIXMATCH
NT-Xent Loss in SimCLR. Given a sample mini-batch containing N different samples, SimCLR first applies two separate data augmentations to each sample to obtain 2N augmented samples. The loss for a positive pair of samples (i, j) can be defined as:
L_{i,j} = − log [ exp(sim(z_i, z_j)/τ) / ∑_{k=1}^{2N} I{k ≠ i} · exp(sim(z_i, z_k)/τ) ],    (5)
where sim(·, ·) is the cosine similarity, z_i is the feature representation of sample i, τ is the temperature parameter, and I{k ≠ i} ∈ {0, 1} indicates whether k ≠ i. The NT-Xent loss is computed across all 2N positive pairs in this mini-batch.
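A minimal PyTorch sketch of Eq. (5) is given below; it assumes the two augmented views of sample k occupy rows 2k and 2k+1 of the projection matrix, which is an implementation convention rather than part of the loss itself.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z, tau=0.5):
    """NT-Xent loss of Eq. (5), a minimal sketch.

    z: (2N, D) projections where rows (2k, 2k+1) are the two augmented
    views of sample k; tau is the temperature parameter.
    """
    z = F.normalize(z, dim=1)              # so dot products equal cosine sim
    sim = z @ z.t() / tau                  # (2N, 2N) similarity matrix
    n2 = z.size(0)
    # Mask the k == i terms, matching the indicator I{k != i} in Eq. (5).
    sim = sim.masked_fill(torch.eye(n2, dtype=torch.bool, device=z.device),
                          float("-inf"))
    pos = torch.arange(n2, device=z.device) ^ 1   # index of each row's positive
    return F.cross_entropy(sim, pos)              # averages over all 2N pairs
```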
SCE. The symmetric cross entropy (SCE) can be defined as: L_SCE = H(p, q) + H(q, p),    (6)
where H(p, q) is the cross entropy, H(q, p) is the reversed cross entropy, p is the prediction, and q is the one-hot label (of the evaluated sample).
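A minimal sketch of Eq. (6) is given below; the α/β weighting and the clamping constant follow common SCE implementations and our settings in Appendix B.3, and are assumptions rather than part of Eq. (6) itself.

```python
import torch
import torch.nn.functional as F

def sce_loss(logits, target, num_classes, alpha=0.1, beta=1.0, eps=1e-4):
    """Symmetric cross-entropy of Eq. (6), a minimal sketch.

    alpha/beta weight the two terms; Appendix B.3 sets them to 0.1 and 1.
    """
    ce = F.cross_entropy(logits, target)                  # H(p, q): standard CE
    pred = F.softmax(logits, dim=1).clamp(min=eps)
    one_hot = F.one_hot(target, num_classes).float().clamp(min=eps)
    rce = -(pred * one_hot.log()).sum(dim=1).mean()       # H(q, p): reversed CE
    return alpha * ce + beta * rce
```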
MixMatch Loss. For a batch X of labeled samples and a batch U of unlabeled samples (|X| = |U|), MixMatch produces a guessed label q̄ for each unlabeled sample u ∈ U and applies MixUp (Zhang et al., 2018) to obtain the augmented X′ and U′. The losses L_X and L_U can be defined as:
L_X = (1/|X′|) ∑_{(x,q)∈X′} H(p_x, q),    (7)
where p_x is the prediction of x, q is its one-hot label, and H(·, ·) is the cross entropy.
L_U = (1/(K · |U′|)) ∑_{(u,q̄)∈U′} ‖p_u − q̄‖_2^2,    (8)
where p_u is the prediction of u, q̄ is its guessed one-hot label, and K is the number of classes.
By combining L_X with L_U, the MixMatch loss can be defined as: L = L_X + λ_U · L_U,    (9)
where λ_U is a hyper-parameter.
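Eq. (7)-(9) can be combined as in the following sketch, which operates on already-mixed batches; using a soft-label cross entropy for L_X is an assumption that covers the MixUp-produced label distributions.

```python
import torch
import torch.nn.functional as F

def mixmatch_loss(logits_x, targets_x, logits_u, guessed_u, lambda_u=15.0):
    """MixMatch loss of Eq. (7)-(9), a minimal sketch over mixed batches.

    logits_x / targets_x: predictions and (mixed) label distributions for X';
    logits_u / guessed_u: predictions and guessed label distributions for U';
    lambda_u: weight of the unsupervised term (15 on CIFAR-10, Appendix B.3).
    """
    # L_X: cross entropy between predictions p_x and (soft) labels q, Eq. (7).
    log_px = F.log_softmax(logits_x, dim=1)
    loss_x = -(targets_x * log_px).sum(dim=1).mean()
    # L_U: mean squared l2 distance between p_u and the guessed labels q-bar,
    # normalized by the number of classes K, Eq. (8).
    pu = F.softmax(logits_u, dim=1)
    loss_u = ((pu - guessed_u) ** 2).sum(dim=1).mean() / logits_u.size(1)
    return loss_x + lambda_u * loss_u   # Eq. (9)
```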
J COMPUTATIONAL FACILITIES
We conduct all experiments on two Ubuntu 18.04 servers with different GPUs. One has four NVIDIA GeForce RTX 2080 Ti GPUs with 11GB memory (dubbed 'RTX 2080 Ti') and the other has three NVIDIA Tesla V100 GPUs with 32GB memory (dubbed 'V100').
Computational Facilities for Attacks. All experiments are conducted with a single RTX 2080 Ti.
Computational Facilities for Defenses. Since we do not use a memory-efficient implementation of DenseNet-121, we conduct DPSGD experiments on the VGGFace2 dataset with a single V100. Other experiments of baseline defenses are conducted with a single RTX 2080 Ti. For our defense, we adopt PyTorch (Paszke et al., 2019) distributed data-parallel and automatic mixed precision training (Micikevicius et al., 2018) with two RTX 2080 Ti for self-supervised learning on the VGGFace2 dataset. Other experiments are conducted with a single RTX 2080 Ti.
K COMPUTATIONAL COST
In this section, we analyze the computational cost of our method stage by stage, compared to standard supervised learning.
Stage 1. Self-supervised learning is known to have a higher computational cost than standard supervised learning (Chen et al., 2020a; He et al., 2020). In our experiments, SimCLR requires roughly four times the computational cost of standard supervised learning. Since we intend to obtain a purified rather than a fully trained feature extractor, we train the feature extractor (i.e., backbone) for fewer epochs than the original SimCLR to reduce the training time. As described in Section B.3, we find that 100 epochs are enough to preserve effectiveness.
Stage 2. Since we freeze the backbone and only train the remaining fully connected layers, the computational cost is roughly 60% of standard supervised learning.
Stage 3. Semi-supervised learning is known to have an extra labeling cost compared with standard supervised learning (Gao et al., 2020). In our experiments, MixMatch requires roughly two times the computational cost of standard supervised learning.
We will explore a more computationally efficient training method in our future work.
L COMPARING OUR DBD WITH DETECTION-BASED BACKDOOR DEFENSES
In this paper, we do not intend to filter malicious and benign samples accurately, as we mentioned in Section 4.4. However, we notice that the second stage of our DBD can serve as a detection-based backdoor defense, since it can filter poisoned samples. In this section, we compare the filtering ability of our DBD (stage 2) with existing detection-based backdoor defenses.
Settings. We compare our DBD with two representative detection-based methods, including Spectral Signatures (SS) (Tran et al., 2018) and Activation Clustering (AC) (Chen et al., 2019), on the CIFAR-10 dataset. These detection-based methods (e.g., SS and AC) filter malicious samples from the training set and train the model on non-malicious samples. Specifically, we re-implement SS in PyTorch based on its official code8 and adopt the open-source code9 for AC, following the settings in their original papers. In particular, since SS filters 1.5ε malicious samples for each class, where the key hyper-parameter ε denotes the upper bound on the number of poisoned training samples, we adopt different ε for a fair comparison.
Results. As shown in Table 22-23, the filtering performance of DBD is on par with that of SS and AC. DBD is even better than those methods when filtering poisoned samples generated by more complicated attacks (i.e., WaNet and Label-Consistent). Besides, we also conduct the standard training on non-malicious samples filtered by SS and AC. As shown in Table 24, the hidden backdoor will still be created in many cases, even though the detection-based defenses are sometimes accurate.
8https://github.com/MadryLab/backdoor_data_poisoning
9https://github.com/ain-soph/trojanzoo/blob/main/trojanvision/defenses/backdoor/activation_clustering.py
This is mainly because these methods may not be able to remove enough poisoned samples while preserving enough benign samples simultaneously, i.e., there is a trade-off between BA and ASR.
M DBD WITH DIFFERENT SELF-SUPERVISED METHODS
In this paper, we believe that the desired feature extractor maps visually similar inputs to similar positions in the feature space, such that poisoned samples will be separated into their source classes. This goal is compatible with that of self-supervised learning. We believe that any self-supervised learning method can be adopted in our defense. To further verify this point, we replace the adopted SimCLR with other self-supervised methods in our DBD and examine their performance.
Settings. We replace SimCLR with two other self-supervised methods, including MoCo-V2 (Chen et al., 2020b) and BYOL (Grill et al., 2020), in our DBD. Except for the adopted self-supervised method, other settings are the same as those used in Section 5.2.
Results. As shown in Table 25, all DBD variants have similar performances. In other words, our DBD is not sensitive to the selection of self-supervised methods.
N DBD WITH DIFFERENT LABEL-NOISE LEARNING METHODS
In the main manuscript, we adopt SCE as the label-noise learning method in our second stage. In this section, we explore whether our DBD is still effective if other label-noise methods are adopted.
Settings. We replace SCE in our DBD with two other label-noise learning methods, including generalized cross entropy (GCE) (Zhang & Sabuncu, 2018) and active passive loss (APL) (Ma et al., 2020). Specifically, we adopt the combination of NCE+RCE in APL and use the default hyper-parameters suggested in their original papers. Except for the adopted label-noise learning method, other settings are the same as those used in Section 5.2.
Results. As shown in Table 26, all DBD variants are effective in reducing backdoor threats (i.e., low ASR) while maintaining high benign accuracy. In other words, our DBD is not sensitive to the selection of label-noise learning methods.
O ANALYZING WHY OUR DBD IS EFFECTIVE IN DEFENDING AGAINST LABEL-CONSISTENT ATTACK
In general, the good defense performance of our DBD method against the label-consistent attack (which is one of the clean-label attacks) can be explained from the following aspects:
Firstly, as shown in Figure 1, there is a common observation across different attacks (including both poison-label and clean-label attacks) that poisoned samples tend to gather together in the feature space learned by standard supervised learning. The most intuitive idea of our DBD is to prevent such gathering in the learned feature space, which is implemented by self-supervised learning. As shown in Figure 1(d), the poisoned samples of the label-consistent attack are also separated into different areas of the feature space learned by self-supervised learning. This example gives an intuitive explanation of why our DBD can successfully defend against the label-consistent attack.
Furthermore, it is interesting to explore why the poisoned samples in the label-consistent attack are separated under self-supervised learning, since all poisoned samples are from the same target class rather than from different source classes as in poison-label attacks. For each poisoned sample in this attack, there are two types of features: the trigger and the benign feature with (untargeted) adversarial perturbations. From the perspective of DNNs, benign samples with (untargeted) adversarial perturbations are similar to samples from different source classes, even though these samples look similar from a human's perspective. Thus, it is not surprising that poisoned samples in clean-label attacks can also be separated under self-supervised learning, just like those in poison-label attacks. | 1. What is the focus of the paper regarding backdoor attacks in deep learning?
2. What are the strengths of the proposed approach, particularly in its experimental design and baselines?
3. What are the weaknesses of the paper, especially regarding its assumptions and practicality?
4. How does the reviewer assess the effectiveness of the proposed method in addressing backdoor attacks?
5. Are there any concerns or limitations regarding the modified training procedure suggested by the authors? | Summary Of The Paper
Review | Summary Of The Paper
Summary: The authors propose a modification to the training procedure to prevent backdoor attacks. Instead of performing supervised training, they suggest first training the model in a self-supervised way, then in a supervised way on the fully connected layers. Later, they propose to remove low-credible samples and fine-tune the whole model on the remaining labeled samples. They claim that this procedure eliminates the backdoored inputs that have incorrect labels.
Review
Strengths: I enjoyed reading the paper as it has a good structure, strong sets of experiments and baselines. Instead of searching for the triggers as NC and other papers are trying to do, this paper proposes to address the label discrepancy introduced by backdoor samples.
Weaknesses:
The proposed method modifies the underlying training procedure, multiplying the training time, which I think is significant for practitioners. Addressing this issue is essential to support the practicality of the method.
The primary assumption of the paper is that semi-supervised learning is safe. However, Carlini's recent work [1] demonstrates the attacker's effectiveness under the same threat model, i.e., when the attacker is only allowed to poison data. If the poisoning is efficient, then the proposed defense exposes the model to a different attack.
[1] Carlini, N. (2021). Poisoning the Unlabeled Dataset of Semi-Supervised Learning. USENIX Security '21.
ICLR | Title
Backdoor Defense via Decoupling the Training Process
Abstract
Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few training samples. The attacked model behaves normally on benign samples, whereas its prediction will be maliciously changed when the backdoor is activated. We reveal that poisoned samples tend to cluster together in the feature space of the attacked DNN model, which is mostly due to the endto-end supervised training paradigm. Inspired by this observation, we propose a novel backdoor defense via decoupling the original end-to-end training process into three stages. Specifically, we first learn the backbone of a DNN model via self-supervised learning based on training samples without their labels. The learned backbone will map samples with the same ground-truth label to similar locations in the feature space. Then, we freeze the parameters of the learned backbone and train the remaining fully connected layers via standard training with all (labeled) training samples. Lastly, to further alleviate side-effects of poisoned samples in the second stage, we remove labels of some ‘low-credible’ samples determined based on the learned model and conduct a semi-supervised fine-tuning of the whole model. Extensive experiments on multiple benchmark datasets and DNN models verify that the proposed defense is effective in reducing backdoor threats while preserving high accuracy in predicting benign samples. Our code is available at https://github.com/SCLBD/DBD.
1 INTRODUCTION
Deep learning, especially deep neural networks (DNNs), has been widely adopted in many realms (Wang et al., 2020b; Li et al., 2020a; Wen et al., 2020) for its high effectiveness. In general, the training of DNNs requires a large amount of training samples and computational resources. Accordingly, third-party resources (e.g., third-party data or servers) are usually involved. While the opacity of the training process brings certain convenience, it also introduces new security threats.
Backdoor attack poses a new security threat to the training process of DNNs (Li et al., 2020c). It maliciously manipulates the prediction of the attacked DNNs by poisoning a few training samples. Specifically, backdoor attackers inject the backdoor trigger (i.e., a particular pattern) to some benign training images and change their labels with the attacker-specified target label. The connection between the backdoor trigger and the target label will be learned by DNNs during the training process. In the inference process, the prediction of attacked DNNs will be changed to the target label when the trigger is present, whereas the attacked DNNs will behave normally on benign samples. As such, users are difficult to realize the existence of hidden backdoors and therefore this attack is a serious threat to the practical applications of DNNs.
In this paper, we first investigate backdoor attacks from the hidden feature space. Our preliminary experiments reveal that the backdoor is embedded in the feature space, i.e., samples with the back-
∗The first two authors contributed equally to this work. This work was mostly done when Kunzhe Huang and Yiming Li were the research interns at The Chinese University of Hong Kong, Shenzhen. † indicates corresponding authors: Baoyuan Wu (wubaoyuan@cuhk.edu.cn) and Zhan Qin (qinzhan@zju.edu.cn).
door trigger (dubbed poisoned samples) tend to cluster together in the feature space. We reveal that this phenomenon is mostly due to the end-to-end supervised training paradigm. Specifically, the excessive learning capability allows DNNs to learn features about the backdoor trigger, while the DNNs can shrink the distance between poisoned samples in the feature space and connect the learned trigger-related features with the target label by the end-to-end supervised training. Based on this understanding, we propose to decouple the end-to-end training process for the backdoor defense. Specifically, we treat the DNNs as two disjoint parts, including a feature extractor (i.e., backbone) and a simple classifier (i.e., the remaining fully connected layers). We first learn the purified feature extractor via self-supervised learning (Kolesnikov et al., 2019; Chen et al., 2020a; Jing & Tian, 2020) with unlabeled training samples (obtained by removing their labels), and then learn the simple classifier via standard supervised training process based on the learned feature extractor and all training samples. The strong data augmentations involved in the self-supervised learning damage trigger patterns, making them unlearnable during representation learning; and the decoupling process further disconnects trigger patterns and the target label. Accordingly, hidden backdoors cannot be successfully created even the model is trained on the poisoned dataset based on our defense.
Moreover, we further reveal that the representation of poisoned samples generated by the purified extractor is significantly different from those generated by the extractor learned with standard training process. Specifically, the poisoned sample lies closely to samples with its ground-truth label instead of the target label. This phenomenon makes the training of the simple classifier similar to label-noise learning (Wang et al., 2019b; Ma et al., 2020; Berthon et al., 2021). As such, we first filter high-credible training samples (i.e., training samples that are most probably to be benign) and then use those samples as labeled samples and the remaining part to form unlabeled samples to fine-tune the whole model via semi-supervised learning (Rasmus et al., 2015; Berthelot et al., 2019; Sohn et al., 2020). This approach is to further reduce the adverse effects of poisoned samples.
The main contributions of this paper are three-fold. (1) We reveal that the backdoor is embedded in the feature space, which is mostly due to the end-to-end supervised training paradigm. (2) Based on our understanding, we propose a decoupling-based backdoor defense (DBD) to alleviate the threat of poisoning-based backdoor attacks. (3) Experiments on classical benchmark datasets are conducted, which verify the effectiveness of our defense.
2 RELATED WORK
2.1 BACKDOOR ATTACK
Backdoor attack is an emerging research area, which raises security concerns about training with third-party resources. In this paper, we focus on the poisoning-based backdoor attack towards image classification, where attackers can only modify the dataset instead of other training components (e.g., training loss). This threat could also happen in other tasks (Xiang et al., 2021; Zhai et al., 2021; Li et al., 2022) and with different attacker’s capacities (Nguyen & Tran, 2020; Tang et al., 2020; Zeng et al., 2021a), which are out-of-scope of this paper. In general, existing attacks can be divided into two main categories based on the property of target labels, as follows:
Poison-Label Backdoor Attack. It is currently the most common attack paradigm, where the target label is different from the ground-truth label of poisoned samples. BadNets (Gu et al., 2019) is the first and most representative poison-label attack. Specifically, it randomly selected a few samples from the original benign dataset to generate poisoned samples by stamping the backdoor trigger onto the (benign) image and change their label with an attacker-specified target label. Those generated poisoned samples associated with remaining benign ones were combined to form the poisoned training dataset, which will be delivered to users. After that, (Chen et al., 2017) suggested that the poisoned image should be similar to its benign version for the stealthiness, based on which they proposed the blended attack. Recently, (Xue et al., 2020; Li et al., 2020b; 2021c) further explored how to conduct poison-label backdoor attacks more stealthily. Most recently, a more stealthy and effective attack, the WaNet (Nguyen & Tran, 2021), was proposed. WaNet adopted image warping as the backdoor trigger, which deforms but preserves the image content.
Clean-Label Backdoor Attack. Although the poisoned image generated by poison-label attacks could be similar to its benign version, users may still notice the attack by examining the image-label relationship. To address this problem, Turner et al. (2019) proposed the clean-label attack paradigm, where the target label is consistent with the ground-truth label of poisoned samples. Specifically,
they first leveraged adversarial perturbations or generative models to modify some benign images from the target class and then conducted the standard trigger injection process. This idea was generalized to attack video classification in (Zhao et al., 2020b), where they adopted the targeted universal adversarial perturbation (Moosavi-Dezfooli et al., 2017) as the trigger pattern. Although clean-label backdoor attacks are more stealthy compared with poison-label ones, they usually suffer from relatively poor performance and may even fail in creating backdoors (Li et al., 2020c).
2.2 BACKDOOR DEFENSE
Currently, there are also some approaches to alleviate the backdoor threat. Existing defenses are mostly empirical, which can be divided into five main categories, including (1) detection-based defenses (Xu et al., 2021; Zeng et al., 2021a; Xiang et al., 2022), (2) preprocessing based defenses (Doan et al., 2020; Li et al., 2021b; Zeng et al., 2021b), (3) model reconstruction based defenses (Zhao et al., 2020a; Li et al., 2021a; Zeng et al., 2022), (4) trigger synthesis based defenses (Guo et al., 2020; Dong et al., 2021; Shen et al., 2021), and (5) poison suppression based defenses (Du et al., 2020; Borgnia et al., 2021). Specifically, detection-based defenses examine whether a suspicious DNN or sample is attacked and it will deny the use of malicious objects; Preprocessing based methods intend to damage trigger patterns contained in attack samples to prevent backdoor activation by introducing a preprocessing module before feeding images into DNNs; Model reconstruction based ones aim at removing the hidden backdoor in DNNs by modifying models directly; The fourth type of defenses synthesize potential trigger patterns at first, following by the second stage that the hidden backdoor is eliminated by suppressing their effects; The last type of methods depress the effectiveness of poisoned samples during the training process to prevent the creation of hidden backdoors. In general, our method is most relevant to this type of defenses.
In this paper, we only focus on the last four types of defenses since they directly improve the robustness of DNNs. Besides, there were also few works focusing on certified backdoor defenses (Wang et al., 2020a; Weber et al., 2020). Their robustness is theoretically guaranteed under certain assumptions, which cause these methods to be generally weaker than empirical ones in practice.
2.3 SEMI-SUPERVISED AND SELF-SUPERVISED LEARNING
Semi-supervised Learning. In many real-world applications, the acquisition of labeled data often relies on manual labeling, which is very expensive. In contrast, obtaining unlabeled samples is much easier. To utilize the power of unlabeled samples with labeled ones simultaneously, a great amount of semi-supervised learning methods were proposed (Gao et al., 2017; Berthelot et al., 2019; Van Engelen & Hoos, 2020). Recently, semi-supervised learning was also introduced in improving the security of DNNs (Stanforth et al., 2019; Carmon et al., 2019), where they utilized unlabelled samples in the adversarial training. Most recently, (Yan et al., 2021) discussed how to backdoor semi-supervised learning. However, this approach needs to control other training components (e.g., training loss) in addition to modifying training samples and therefore is out-of-scope of this paper. How to adopt semi-supervised learning for backdoor defense remains blank.
Self-supervised Learning. This learning paradigm is a subset of unsupervised learning, where DNNs are trained with supervised signals generated from the data itself (Chen et al., 2020a; Grill et al., 2020; Liu et al., 2021). It has been adopted for increasing adversarial robustness (Hendrycks et al., 2019; Wu et al., 2021; Shi et al., 2021). Most recently, there were also a few works (Saha et al., 2021; Carlini & Terzis, 2021; Jia et al., 2021) exploring how to backdoor self-supervised learning. However, these attacks are out-of-scope of this paper since they need to control other training components (e.g., training loss) in addition to modifying training samples.
3 REVISITING BACKDOOR ATTACKS FROM THE HIDDEN FEATURE SPACE
In this section, we analyze the behavior of poisoned samples from the hidden feature space of attacked models and discuss its inherent mechanism.
Settings. We conduct the BadNets (Gu et al., 2019) and label-consistent attack (Turner et al., 2019) on CIFAR-10 dataset (Krizhevsky, 2009) for the discussion. They are representative of poison-label attacks and clean-label attacks, respectively. Specifically, we conduct supervised learning on the poisoned datasets with the standard training process and self-supervised learning on the unlabelled
poisoned datasets with SimCLR (Chen et al., 2020a). We visualize poisoned samples in the hidden feature space generated by attacked DNNs based on the t-SNE (Van der Maaten & Hinton, 2008). More detailed settings are presented in Appendix A.
Results. As shown in Figure 1(a)-1(b), poisoned samples (denoted by ‘black-cross’) tend to cluster together to form a separate cluster after the standard supervised training process, no matter under the poison-label attack or clean-label attack. This phenomenon implies why existing poisoning-based backdoor attacks can succeed. Specifically, the excessive learning capability allows DNNs to learn features about the backdoor trigger. Associated with the end-to-end supervised training paradigm, DNNs can shrink the distance between poisoned samples in the feature space and connect the learned trigger-related features with the target label. In contrast, as shown in Figure 1(c)-1(d), poisoned samples lie closely to samples with their ground-truth label after the self-supervised training process on the unlabelled poisoned dataset. It indicates that we can prevent the creation of backdoors by self-supervised learning, which will be further introduced in the next section.
4 DECOUPLING-BASED BACKDOOR DEFENSE
4.1 PRELIMINARIES
General Pipeline of Backdoor Attacks. Let D = {(xi, yi)}Ni=1 denotes the benign training set, where xi ∈ X = {0, 1, . . . , 255}C×W×H is the image, yi ∈ Y = {0, 1, . . . ,K} is its label,K is the number of classes, and yt ∈ Y indicates the target label. How to generate the poisoned datasetDp is the cornerstone of backdoor attacks. Specifically,Dp consists of two subsets, including the modified version of a subset of D and remaining benign samples, i.e., Dp = Dm ∪ Db, where Db ⊂ D, γ , |Dm||D| is the poisoning rate, Dm = {(x
′, yt)|x′ = G(x), (x, y) ∈ D\Db}, and G : X → X is an attacker-predefined poisoned image generator. For example, G(x) = (1−λ)⊗x+λ⊗ t, where λ ∈ [0, 1]C×W×H , t ∈ X is the trigger pattern, and ⊗ is the element-wise product in the blended attack (Chen et al., 2017). Once Dp is generated, it will be sent to users who will train DNNs on it. Hidden backdoors will be created after the training process.
Threat Model. In this paper, we focus on defending against poisoning-based backdoor attacks. The attacker can arbitrarily modify the training set whereas cannot change other training components (e.g., model structure and training loss). For our proposed defense, we assume that defenders can fully control the training process. This is the scenario that users adopt third-party collected samples for training. Note that we do not assume that defenders have a local benign dataset, which is often required in many existing defenses (Wang et al., 2019a; Zhao et al., 2020a; Li et al., 2021a).
Defender’s Goals. The defender’s goals are to prevent the trained DNN model from predicting poisoned samples as the target label and to preserve the high accuracy on benign samples.
4.2 OVERVIEW OF THE DEFENSE PIPELINE
In this section, we describe the general pipeline of our defense. As shown in Figure 2, it consists of three main stages, including (1) learning a purified feature extractor via self-supervised learning, (2) filtering high-credible samples via label-noise learning, and (3) semi-supervised fine-tuning.
Specifically, in the first stage, we remove the label of all training samples to form the unlabelled dataset, based on which to train the feature extractor via self-supervised learning. In the second stage, we freeze the learned feature extractor and adopt all training samples to train the remaining fully connected layers via supervised learning. We then filter α% high-credible samples based on the training loss. The smaller the loss, the more credible the sample. After the second stage, the training set will be separated into two disjoint parts, including high-credible samples and lowcredible samples. We use high-credible samples as labeled samples and remove the label of all low-credible samples to fine-tune the whole model via semi-supervised learning. More detailed information about each stage of our method will be further illustrated in following sections.
4.3 LEARNING PURIFIED FEATURE EXTRACTOR VIA SELF-SUPERVISED LEARNING
Let Dt denotes the training set and fw : X → [0, 1]K indicates the DNN with parameter w = [wc,wf ], wherewc andwf indicates the parameters of the backbone and the fully connected layer, respectively. In this stage, we optimizewc based on the unlabeled version of Dt via self-supervised learning, as follows:
w∗c = arg min wc ∑ (x,y)∈Dt L1(x;wc), (1)
where L1(·) indicates the self-supervised loss (e.g., NT-Xent in SimCLR (Chen et al., 2020a)). Through the self-supervised learning, the learned feature extractor (i.e., backbone) will be purified even if the training set contains poisoned samples, as illustrated in Section 3.
4.4 FILTERING HIGH-CREDIBLE SAMPLES VIA LABEL-NOISE LEARNING
Once w∗c is obtained, the user can freeze it and adopt Dt to further optimize remaining wf , i.e.,
w∗f = arg min wf ∑ (x,y)∈Dt L2 ( f[w∗c ,wf ](x), y ) , (2)
where L2(·) indicates the supervised loss (e.g., cross entropy). After the decoupling-based training process (1)-(2), even if the model is (partly) trained on the poisoned dataset, the hidden backdoor cannot be created since the feature extractor is purified. However, this simple strategy suffers from two main problems. Firstly, compared with the one trained via supervised learning, the accuracy of predicting benign samples will have a certain decrease, since the learned feature extractor is frozen in the second stage. Secondly, poisoned samples will serve as ‘outliers’ to further hinder the learning of the second stage when poison-label attacks appear, since those samples lie close to samples with its ground-truth label instead of the target label in the hidden feature space generated by the learned purified feature extractor. These two problems indicate that we should remove poisoned samples and retrain or fine-tune the whole model.
Specifically, we select high-credible samples Dh based on the loss L2(·; [w∗c ,w∗f ]). The highcredible samples are defined as the α% training samples with the smallest loss, where α ∈ [0, 100] is
a hyper-parameter. In particular, we adopt the symmetric cross-entropy (SCE) (Wang et al., 2019b) as L2(·), inspired by the label-noise learning. As shown in Figure 3, compared with the CE loss, the SCE can significantly increase the differences between poisoned samples and benign ones, which further reduces the possibility that high-credible dataset Dh still contains poisoned samples. Note that we do not intend to accurately separate poisoned samples and benign samples. We only want to ensure that the obtained Dh contains as few poisoned samples as possible.
4.5 SEMI-SUPERVISED FINE-TUNING
After the second stage, the third-party training setDt will be separated into two disjoint parts, including the high-credible dataset Dh and the low-credible dataset Dl , Dt\Dh. Let D̂l , {x|(x, y) ∈ Dl} indicates the unlabeled version of low-credible dataset Dl. We fine-tune the whole trained model f[w∗c ,w∗f ](·) with semi-supervised learning as follows:
min w L3(Dh, D̂l;w), (3)
where L3(·) denotes the semi-supervised loss (e.g., the loss in MixMatch (Berthelot et al., 2019)). This process can prevent the side-effects of poisoned samples while utilizing their contained useful information, and encourage the compatibility between the feature extractor and the simple classifier via learning them jointly instead of separately. Please refer to Section 5.3 for more results.
5 EXPERIMENTS
5.1 EXPERIMENTAL SETTINGS
Datasets and DNNs. We evaluate all defenses on two classical benchmark datasets, including CIFAR-10 (Krizhevsky, 2009) and (a subset of) ImageNet (Deng et al., 2009). We adopt the ResNet18 (He et al., 2016) for these tasks. More detailed settings are presented in Appendix B.1. Besides, we also provide the results on (a subset of) VGGFace2 (Cao et al., 2018) in Appendix C.
Attack Baselines. We examine all defense approaches in defending against four representative attacks. Specifically, we select the BadNets (Gu et al., 2019), the backdoor attack with blended strategy (dubbed 'Blended') (Chen et al., 2017), WaNet (Nguyen & Tran, 2021), and label-consistent attack with adversarial perturbations (dubbed 'Label-Consistent') (Turner et al., 2019) for the evaluation. They are representative of patch-based visible and invisible poison-label attacks, non-patch-based poison-label attacks, and clean-label attacks, respectively.
Defense Baselines. We compare our DBD with two defenses having the same defender's capacities, including the DPSGD (Du et al., 2020) and ShrinkPad (Li et al., 2021b). We also compare with two other approaches with an additional requirement (i.e., having a local benign dataset), including the neural cleanse with unlearning strategy (dubbed 'NC') (Wang et al., 2019a), and neural attention distillation (dubbed 'NAD') (Li et al., 2021a). They are representatives of poison suppression based defenses, preprocessing based defenses, trigger synthesis based defenses, and model reconstruction based defenses, respectively. We also provide results of DNNs trained without any defense (dubbed 'No Defense') as another important baseline for reference.
Attack Setups. We use a 2 × 2 square as the trigger pattern on CIFAR-10 dataset and the 32 × 32 Apple logo on ImageNet dataset for the BadNets, as suggested in (Gu et al., 2019; Wang et al., 2019a). For Blended, we adopt the ‘Hello Kitty’ pattern on CIFAR-10 and the random noise pattern on ImageNet, based on the suggestions in (Chen et al., 2017), and set the blended ratio λ = 0.1 on all datasets. The trigger pattern adopted in label-consistent attack is the same as the one used in BadNets. For WaNet, we adopt its default settings on CIFAR-10 dataset. However, on ImageNet dataset, we use different settings optimized by grid-search since the original ones fail. An example of poisoned samples generated by different attacks is shown in Figure 4. Besides, we set the poisoning rate γ1 = 2.5% for label-consistent attack (25% of training samples with the target label) and γ2 = 5% for three other attacks. More details are shown in Appendix B.2.
Defense Setups. For our DBD, we adopt SimCLR (Chen et al., 2020a) as the self-supervised method and MixMatch (Berthelot et al., 2019) as the semi-supervised method. More details about SimCLR and MixMatch are in Appendix I. The filtering rate α is the only key hyper-parameter in DBD, which is set to 50% in all cases. We set the shrinking rate to 10% for the ShrinkPad on all datasets, as suggested in (Li et al., 2021b; Zeng et al., 2021b). In particular, DPSGD and NAD are sensitive to their hyper-parameters. We report their best results in each case based on the grid-search (as shown in Appendix D). Besides, we split a 5% random subset of the benign training set as the local benign dataset for NC and NAD. More implementation details are provided in Appendix B.3.
Evaluation Metrics. We adopt the attack success rate (ASR) and benign accuracy (BA) to measure the effectiveness of all methods1. Specifically, let Dtest denote the (benign) testing set and Cw : X → Y denote the trained classifier; we have ASR ≜ Pr_{(x,y)∈Dtest}{Cw(G(x)) = yt | y ≠ yt} and BA ≜ Pr_{(x,y)∈Dtest}{Cw(x) = y}, where yt is the target label and G(·) is the poisoned image generator. In particular, the lower the ASR and the higher the BA, the better the defense.
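The following sketch computes both metrics directly from these definitions; `poison_fn` stands in for the poisoned image generator G(·).

```python
import torch

@torch.no_grad()
def evaluate(classifier, test_loader, poison_fn, y_target, device="cuda"):
    # Compute benign accuracy (BA) and attack success rate (ASR) as
    # defined above; ASR only counts samples whose ground-truth label
    # differs from the target label y_t.
    classifier.eval().to(device)
    ba_hits = ba_total = asr_hits = asr_total = 0
    for x, y in test_loader:
        x, y = x.to(device), y.to(device)
        pred = classifier(x).argmax(dim=1)
        ba_hits += (pred == y).sum().item()
        ba_total += y.numel()
        mask = y != y_target                       # exclude target-class samples
        if mask.any():
            pred_p = classifier(poison_fn(x[mask])).argmax(dim=1)
            asr_hits += (pred_p == y_target).sum().item()
            asr_total += int(mask.sum().item())
    return ba_hits / ba_total, asr_hits / asr_total    # (BA, ASR)
```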
5.2 MAIN RESULTS
Comparing DBD with Defenses having the Same Requirements. As shown in Table 1-2, DBD is significantly better than defenses having the same requirements (i.e., DPSGD and ShrinkPad) in defending against all attacks. For example, the benign accuracy of DBD is over 20% higher, and the attack success rate over 5% lower, than those of DPSGD in all cases. Specifically, the attack success rate of models with DBD is less than 2% in all cases (mostly < 0.5%), which verifies that our method can successfully prevent the creation of hidden backdoors. Moreover, the decreases of benign accuracy are less than 2% when defending against poison-label attacks, compared with models trained without any defense. Our method is even better on the relatively larger dataset, where all baseline methods become less effective. These results verify the effectiveness of our method.
1Among all defense methods, the one with the best performance is indicated in boldface and the value with underline denotes the second-best result.
Comparing DBD with Defenses having Extra Requirements. We also compare our defense with two other methods (i.e., NC and NAD), which have an additional requirement that defenders have a benign local dataset. As shown in Table 1-2, NC and NAD are better than DPSGD and ShrinkPad, as we expected, since they adopt additional information from the benign local dataset. In particular, although NAD and NC use additional information, our method is still better than them, even when their performances are tuned to the best while our method only uses the default settings. Specifically, the BA of NC is on par with that of our method; however, this comes at the sacrifice of ASR. Especially on the ImageNet dataset, NC has limited effects in reducing the ASR. In contrast, our method reaches the smallest ASR while its BA is either the highest or the second-highest in almost all cases. These results verify the effectiveness of our method again.
5.3 ABLATION STUDY
There are four key strategies in DBD, including (1) obtaining purified feature extractor, (2) using SCE instead of CE in the second stage, (3) reducing side-effects of low-credible samples, and (4) fine-tuning the whole model via semi-supervised learning. Here we verify their effectiveness.
Settings. We compare the proposed DBD with its four variants, including (1) DBD without SS, (2) SS with CE, (3) SS with SCE, and (4) SS with SCE + Tuning, on the CIFAR-10 dataset. Specifically, in the first variant, we replace the backbone generated by self-supervised learning with the one trained in a supervised fashion and keep other parts unchanged. In the second variant, we freeze the backbone learned via self-supervised learning and train the remaining fully-connected layers with cross-entropy loss on all training samples. The third variant is similar to the second one. The only difference is that it uses symmetric cross-entropy instead of cross-entropy to train fully-connected layers. The last variant is an advanced version of the third one, which further fine-tunes fully-connected layers on high-credible samples filtered by the third variant.
Results. As shown in Table 3, we can conclude that decoupling the original end-to-end supervised training process is effective in preventing the creation of hidden backdoors, by comparing our DBD with its first variant and the model trained without any defense. Besides, we can also verify the effectiveness of SCE loss on defending against poison-label backdoor attacks by comparing the second and third DBD variants. Moreover, the fourth DBD variant has relatively lower ASR and BA, compared with the third one. This phenomenon is due to the removal of low-credible samples. It indicates that reducing side-effects of low-credible samples while adopting their useful information is important for the defense. We can also verify that fine-tuning the whole model via semi-supervised learning is also useful by comparing the fourth variant and the proposed DBD.
5.4 RESISTANCE TO POTENTIAL ADAPTIVE ATTACKS
In our paper, we adopted the classical defense setting in which attackers have no information about the defense. Attackers may design adaptive attacks if they know the existence of our DBD. The most straightforward idea is to manipulate the self-supervised training process so that poisoned samples are still in a new cluster after the self-supervised learning. However, this is not allowed under our threat model, where attackers can only modify the training dataset. Despite this, attackers may design adaptive attacks by optimizing the trigger pattern to make poisoned samples still form a new cluster after the self-supervised learning if they know the model structure used by defenders, as follows:
Problem Formulation. For a K-classification problem, let X' = {x_i}_{i=1}^{M} denote the benign images selected for poisoning, X_j = {x_i}_{i=1}^{N_j} denote the benign images with ground-truth label j, and g be a trained backbone. Given an attacker-predefined poisoned image generator G, the adaptive attack aims to optimize a trigger pattern t by minimizing the distance between poisoned images while maximizing the distance between the center of poisoned images and the centers of clusters of benign images with different labels, i.e.,
\min_t \; \frac{1}{M} \sum_{x \in X'} d\big(g(G(x; t)), g'\big) - \frac{1}{K} \sum_{i=1}^{K} d\big(g', g_i\big), \quad (4)
where g' ≜ \frac{1}{M} \sum_{x \in X'} g(G(x; t)), g_i ≜ \frac{1}{N_i} \sum_{x \in X_i} g(x), and d is a distance metric.
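A sketch of this trigger optimization is given below. Here G is instantiated as an additive trigger, G(x; t) = clip(x + t), which is one concrete choice of the attacker-predefined generator rather than the only one; the Adam settings follow the paragraph below.

```python
import torch

def optimize_adaptive_trigger(backbone, x_poison, class_means, steps=100):
    # Eq. (4): pull poisoned features together while pushing their
    # center away from benign class centers.
    # x_poison: (M, C, H, W) benign images selected for poisoning;
    # class_means: (K, d) per-class feature centers g_i under g.
    backbone.eval()
    t = torch.zeros_like(x_poison[0:1], requires_grad=True)
    opt = torch.optim.Adam([t], lr=0.1)
    for _ in range(steps):
        poisoned = (x_poison + t).clamp(0, 1)              # G(x; t)
        feats = backbone(poisoned)                         # g(G(x; t))
        center = feats.mean(dim=0, keepdim=True)           # g'
        intra = (feats - center).norm(dim=1).mean()        # (1/M) sum of d(., g')
        inter = (class_means - center).norm(dim=1).mean()  # (1/K) sum of d(g', g_i)
        loss = intra - inter                               # Eq. (4) objective
        opt.zero_grad(); loss.backward(); opt.step()
    return t.detach()
```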
Settings. We adopt the CIFAR-10 dataset and use the ℓ2 norm as the distance metric to conduct the experiment. Specifically, we assume that attackers have the entire benign dataset, based on which they can train a backbone adopted in the first stage of our DBD. We use the Adam optimizer to solve the above optimization problem for 100 epochs with a learning rate of 0.1. The trigger size is set to 32×32, which means the attacker can completely modify the content of poisoned samples, regardless of their original semantic information and the stealthiness of the attack. This setting is intended to maximize the attack's capability, since clustering poisoned samples together is very difficult under self-supervised learning.
Results. The adaptive attack works well when there is no defense (BA=94.96%, ASR=99.70%). However, this attack still fails against our DBD (BA=93.21%, ASR=1.02%). In other words, our defense is resistant to this adaptive attack. This is most likely because the trigger optimized against a fixed backbone becomes far less effective once the model is retrained, since the model parameters change due to random initialization and weight updates during the training process.
6 CONCLUSION
The mechanism of poisoning-based backdoor attacks is to establish a latent connection between trigger patterns and the target label during the training process. In this paper, we revealed that this connection is learned mostly due to the end-to-end supervised training paradigm. Motivated by this understanding, we proposed a decoupling-based backdoor defense, which first learns the backbone via self-supervised learning and then the remaining fully-connected layers via classical supervised learning. We also introduced the label-noise learning method to determine high-credible and low-credible samples, based on which we fine-tuned the whole model via semi-supervised learning. Extensive experiments verify that our defense is effective in reducing backdoor threats while preserving high accuracy in predicting benign samples.
ACKNOWLEDGMENTS
Baoyuan Wu is supported in part by the National Natural Science Foundation of China under Grant 62076213, the University Development Fund of the Chinese University of Hong Kong, Shenzhen under Grant 01001810, and the Special Project Fund of Shenzhen Research Institute of Big Data under Grant T00120210003. Zhan Qin is supported in part by the National Natural Science Foundation of China under Grant U20A20178, the National Key Research and Development Program of China under Grant 2020AAA0107705, and the Research Laboratory for Data Security and Privacy, Zhejiang University-Ant Financial Fintech Center. Kui Ren is supported by the National Key Research and Development Program of China under Grant 2020AAA0107705.
ETHICS STATEMENT
DNNs are widely adopted in many mission-critical areas (e.g., face recognition) and therefore their security is of great significance. The vulnerability of DNNs to backdoor attacks raises serious concerns about using third-party training resources. In this paper, we propose a general training pipeline to obtain backdoor-free DNNs, even if the training dataset contains poisoned samples. This work has no ethical issues in general since our method is purely defensive and does not reveal any new vulnerabilities of DNNs. However, we need to mention that our defense can be adopted only when training with untrusted samples, and backdoor attacks could happen in other scenarios. People should not be too optimistic about eliminating backdoor threats.
REPRODUCIBILITY STATEMENT
The detailed descriptions of datasets, models, and training settings are in Appendix A-D. We also describe the computational facilities and cost in Appendix J-K. The code of our DBD is also open-sourced.
A DETAILED SETTINGS FOR REVISITING BACKDOOR ATTACKS
Attack Setups. We conduct the BadNets (Gu et al., 2019) and label-consistent attack (Turner et al., 2019) with the target label yt = 3 on the CIFAR-10 dataset (Krizhevsky, 2009). The trigger patterns are the same as those presented in Section 5.2. In particular, we implement the label-consistent attack with adversarial perturbations, as suggested in its original paper (Turner et al., 2019). Specifically, we used the projected gradient descent (PGD) (Madry et al., 2018) to generate adversarial perturbations within the ℓ∞-ball with maximum perturbation size ε = 16.
Training Setups. We conduct supervised learning on the poisoned datasets with the standard training process and self-supervised learning on the unlabelled poisoned datasets with SimCLR (Chen et al., 2020a). The supervised training is conducted based on the open-source code2. Specifically, we use the SGD optimizer with momentum 0.9, weight decay of 5 × 10−4, and an initial learning rate of 0.1. The batch size is set to 128 and we train the ResNet-18 model for 200 epochs. The learning rate is decreased by a factor of 10 at epochs 100 and 150, respectively. Besides, we add triggers before performing the data augmentation (e.g., random crop and horizontal flipping). For the self-supervised training, we use the stochastic gradient descent (SGD) optimizer with a momentum of 0.9, an initial learning rate of 0.4, and a weight decay factor of 5 × 10−4. We use a batch size of 512, and train the backbone for 1,000 epochs. We decay the learning rate with the cosine decay schedule (Loshchilov & Hutter, 2016) without a restart. Besides, we also adopt strong data augmentation techniques, including random crop and resize (with random flip), color distortions, and Gaussian blur, as suggested in (Chen et al., 2020a). All models are trained until convergence.
t-SNE Visualization Settings. We treat the output of the last residual unit as the feature representation and use the tsne-cuda library (Chan et al., 2019) to get the feature embedding of all samples. To have a better visualization, we adopt all poisoned samples and randomly select 10% benign samples for visualizing models under the supervised learning, and adopt 30% poisoned samples and 10% benign samples for those under the self-supervised learning.
B DETAILED SETTINGS FOR MAIN EXPERIMENTS
B.1 MORE DETAILS ABOUT DATASETS AND DNNS
Due to the limitations of computational resources and time, we adopt a subset randomly selected from the original ImageNet. More detailed information about the datasets and DNNs adopted in the main experiments of our paper is presented in Table 4.
B.2 MORE DETAILS ABOUT ATTACK SETTINGS
Attack Setups. We conduct the BadNets (Gu et al., 2019), blended attack (dubbed 'Blended') (Chen et al., 2017), label-consistent attack (dubbed 'Label-Consistent') (Turner et al., 2019), and WaNet (Nguyen & Tran, 2021) with the target label yt = 3 on all datasets. The trigger patterns are the same as those presented in Section 5.2. In particular, we set the blended ratio λ = 0.1 for the blended attack on all datasets and examine the label-consistent attack with maximum perturbation size ε ∈ {16, 32}. Besides, WaNet assumed that attackers can fully control the whole training process in its original paper. However, we found that WaNet only modified training data while other training components (e.g., training loss, training schedule, and model structure) are the same as those used in the standard training process. As such, we re-implement its code in the poisoning-based attack scenario based on its official code3. Specifically, following the settings in its original paper, we set the noise rate ρn = 0.2, control grid size k = 4, and warping strength s = 0.5 on the CIFAR-10 dataset. However, we found that the default k and s are too small to make the attack work on the ImageNet dataset (as shown in Table 5-6). Besides, the 'noise mode' also significantly reduces the attack effectiveness (as shown in Table 7). As such, we set k = 224 and s = 1 and train models without the noise mode on the ImageNet dataset.
2https://github.com/kuangliu/pytorch-cifar 3https://github.com/VinAIResearch/Warping-based_Backdoor_Attack-release
Training Setups. On the CIFAR-10 dataset (Krizhevsky, 2009), the settings are the same as those described in Section A; On the ImageNet dataset (Deng et al., 2009), we conduct experiments based on the open-source code4. Specifically, we use the SGD optimizer with momentum 0.9, weight decay of 10−4, and an initial learning rate of 0.1. The batch size is set to 256 and we train the ResNet-18 model for 90 epochs. The learning rate is decreased by a factor of 10 at epochs 30 and 60, respectively. Besides, since the raw images in the ImageNet dataset are of different sizes, we resize them to 3× 224× 224 before adding triggers.
B.3 MORE DETAILS ABOUT DEFENSE SETTINGS
Settings for NC. We conduct reverse engineering and anomaly detection based on its open-source code5. We implement the ‘unlearning’ method to patch attacked models, as suggested in its paper (Wang et al., 2019a). We randomly select 5% benign training samples as the local benign dataset, which is used in the ‘unlearning’ process. Unless otherwise specified, other settings are the same as those used in (Wang et al., 2019a).
Settings for NAD. We implement this method based on its open-source code6. The original NAD only conducted experiments on the WideResNet model. In our paper, we calculate the NAD loss over the last residual group for the ResNet-18. The local benign dataset is the same as the one adopted in NC, which is used in the fine-tuning and distillation process of NAD. Unless otherwise specified, other settings are the same as those used in (Li et al., 2021a).
Settings for DPSGD. The original DPSGD was conducted on the MNIST dataset implemented by the TensorFlow Framework. In this paper, we re-implement it based on the differentially private SGD method provided by the Opacus7. Specifically, we replace the original SGD optimizer with the differentially private one, as suggested in (Du et al., 2020). There are two important hyper-parameters in DPSGD, including the noise scale σ and the clipping bound C. In the experiments, we set C = 1 and select the best σ by the grid-search.
4https://github.com/pytorch/examples/tree/master/imagenet 5https://github.com/bolunwang/backdoor 6https://github.com/bboylyg/NAD 7https://github.com/pytorch/opacus
Settings for ShrinkPad. We set the shrinking rate to 10% on all datasets, as suggested in (Li et al., 2021b; Zeng et al., 2021b). Following their settings, we pad 0-pixels at the bottom right of the shrunk image to expand it to its original size.
Settings for our Defense. In the first stage, we adopt SimCLR (Chen et al., 2020a) to perform self-supervised learning. We train backbones for 100 instead of 1,000 epochs to reduce computational costs while preserving effectiveness. Other settings are the same as those described in Section A. We use the same settings across all datasets, models, and attacks; In the second stage, we use the Adam optimizer with a learning rate of 0.002 and set the batch size to 128. We train the fully connected layers for 10 epochs with the SCE loss (Wang et al., 2019b). Two hyper-parameters involved in the SCE (i.e., α and β in the original paper) are set to 0.1 and 1, respectively. After that, we filter 50% high-credible samples. We use the same settings across all datasets, models, and attacks; In the third stage, we adopt MixMatch (Berthelot et al., 2019) for semi-supervised fine-tuning with settings suggested in its original paper. Specifically, we use the Adam optimizer with a learning rate of 0.002, a batch size of 64, and fine-tune the model for 190 epochs on CIFAR-10 and 80 epochs on the ImageNet dataset, respectively. We set the temperature T = 0.5 and the weight of unsupervised loss λu = 15 on CIFAR-10 and λu = 6 on the ImageNet dataset, respectively. Moreover, we re-filter high-credible samples after every epoch of the third stage based on the SCE loss.
C DEFENDING AGAINST ATTACKS ON VGGFACE2 DATASET
Dataset and DNN. Due to the limitations of computational resources and time, we adopt a subset randomly selected from the original VGGFace2 (Cao et al., 2018). More details are in Table 8.
Settings for Attacks. For the training of models on the VGGFace2 dataset, the batch size is set to 32 and we conduct experiments on the DenseNet-121 model (Huang et al., 2017). An example of poisoned samples generated by different attacks is shown in Figure 5. Other settings are the same as those used on the ImageNet dataset.
Settings for Defenses. For NAD, we calculate the NAD loss over the second to last layer for the DenseNet-121. Other settings are the same as those used on the ImageNet dataset.
Results. As shown in Table 9, our defense still reaches the best performance even compared with NC and NAD. Specifically, the BA of NC is on par with that of our method, whereas this comes at the sacrifice of ASR. These results verify the effectiveness of our defense again.
D SEARCHING BEST RESULTS FOR DPSGD AND NAD
The effectiveness of DPSGD and NAD is sensitive to their hyper-parameters. Here we search for their best results based on the criterion that 'BA − ASR' reaches the highest value after the defense.
D.1 SEARCHING BEST RESULTS FOR DPSGD
In general, the larger the σ, the smaller the ASR while also the smaller the BA. The results of DPSGD are shown in Table 10-12, where the best results are marked in boldface.
D.2 SEARCHING BEST RESULTS FOR NAD
We found that the fine-tuning stage of NAD is sensitive to the learning rate. We search the best initial learning rate from {0.1, 0.01, 0.001}. As shown in Table 13-15, a very large learning rate significantly reduces the BA, while a very small learning rate can not reduce the ASR effectively. To keep a relatively large BA while maintaining a small ASR, we set η = 0.01 in the fine-tuning stage.
The distillation stage of NAD is also sensitive to its hyper-parameter β. We select the best β via the grid-search. The results are shown in Table 16-19.
E DEFENDING AGAINST LABEL-CONSISTENT ATTACK WITH A SMALLER POISONING RATE
For the label-consistent attack, except for the 2.5% poisoning rate examined in the main manuscript, 0.6% is also an important setting provided in its original paper (Turner et al., 2019). In this section, we compare different defenses against the label-consistent attack with poisoning rate γ = 0.6%.
As shown in Table 20, when defending against the label-consistent attack with a 0.6% poisoning rate, our method is still significantly better than defenses having the same requirements (i.e., DPSGD and ShrinkPad). Even compared with those having the additional requirement (i.e., NC and NAD) under their best settings, our defense is still better than or on par with them under the default settings. These results verify the effectiveness of our method again.
F DEFENDING AGAINST ATTACKS WITH DIFFERENT TRIGGER PATTERNS
In this section, we verify whether DBD is still effective when different trigger patterns are adopted.
Settings. For simplicity, we adopt the BadNets on the CIFAR-10 dataset as an example for the discussion. Specifically, we change the location and size of the backdoor trigger while keeping other settings unchanged to evaluate the BA and ASR before and after our defense.
Results. As shown in Table 21, although there are some fluctuations, the ASR is smaller than 2% while the BA is greater than 92% in all cases. In other words, our method is effective in defending against attacks with different trigger patterns.
G DEFENDING AGAINST ATTACKS WITH DYNAMIC TRIGGERS
In this section, we verify whether DBD is still effective when attackers adopt dynamic triggers.
Settings. We compare DBD with MESA (Qiao et al., 2019) in defending against the dynamic attack discussed in (Qiao et al., 2019) on the CIFAR-10 dataset as an example for the discussion. This dynamic attack uses a distribution of triggers instead of a fixed trigger.
Results. The BA and ASR of DBD are 92.4% and 0.4%, while those of MESA are 94.8% and 2.4%. However, we find that MESA fails in defending against the blended attack (since it cannot correctly detect the trigger), whereas DBD is still effective. These results verify the effectiveness of our defense.
H DISCUSSIONS
H.1 EFFECTS OF HYPER-PARAMETERS
Settings. Here we analyze the effect of the filtering rate α, which is the only key method-related hyper-parameter in our DBD. We adopt the results on the CIFAR-10 dataset for discussion. Except for the studied parameter α, other settings are the same as those used in Section 5.2.
Figure 6: The effects of the filtering rate (%) on BA (%) and ASR (%) under the BadNets, Blended, WaNet, and Label-Consistent attacks.
Figure 7: The effects of the poisoning rate (%) on BA (%) and ASR (%) under the BadNets, Blended, and WaNet attacks.
Results. The number of labeled samples used in the third stage increases with the filtering rate α, while the probability that the filtered high-credible dataset contains poisoned samples also increases. As shown in Figure 6, DBD can still maintain relatively high benign accuracy even when the filtering rate α is relatively small (e.g., 30%). This is mostly due to the high quality of the learned purified feature extractor and the semi-supervised fine-tuning process. DBD can also reach a nearly 0% attack success rate in all cases. However, we also have to notice that the high-credible dataset may contain poisoned samples when α is very large, which in turn creates hidden backdoors again during the fine-tuning process. Defenders should specify α based on their specific needs.
H.2 DEFENDING ATTACKS WITH VARIOUS POISONING RATES
Settings. We evaluate our method in defending against attacks with different poisoning rates γ on the CIFAR-10 dataset. Except for γ, other settings are the same as those used in Section 5.2.
Results. As shown in Figure 7, our method can still prevent the creation of hidden backdoors even when the poisoning rate reaches 20%. Besides, DBD also maintains high benign accuracy. In other words, our method is effective in defending attacks with different strengths.
I MORE DETAILS ABOUT SIMCLR, SCE, AND MIXMATCH
NT-Xent Loss in SimCLR. Given a sample mini-batch containing N different samples, SimCLR first applies two separate data augmentations to each sample to obtain 2N augmented samples. The loss for a positive pair of samples (i, j) can be defined as:
\mathcal{L}_{i,j} = -\log \frac{\exp(\mathrm{sim}(z_i, z_j)/\tau)}{\sum_{k=1}^{2N} \mathbb{I}\{k \neq i\} \cdot \exp(\mathrm{sim}(z_i, z_k)/\tau)}, \quad (5)
where sim(·, ·) is the cosine similarity, z_i is the feature representation of sample i, τ is the temperature parameter, and I{k ≠ i} ∈ {0, 1} indicates whether k ≠ i. The NT-Xent loss is computed across all 2N positive pairs in this mini-batch.
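A compact implementation of this loss might look as follows; the positives are arranged so that views i and i + N of the same image form a pair.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    # NT-Xent loss over a mini-batch of N positive pairs (Eq. (5)).
    # z1, z2: (N, d) projections of two augmented views of the same images.
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # 2N unit vectors
    sim = z @ z.t() / tau                                  # pairwise cosine / tau
    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))              # drop the k = i terms
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, pos)                       # mean over all 2N anchors
```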
SCE. The symmetric cross-entropy (SCE) can be defined as:
\mathcal{L}_{SCE} = H(p, q) + H(q, p), \quad (6)
where H(p, q) is the cross entropy, H(q, p) is the reversed cross entropy, p is the prediction, and q is the one-hot label (of the evaluated sample).
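A sketch of this loss is given below. The weights α and β match the values we use in Appendix B.3, and the clamping of log 0 in the reversed term to a large negative constant follows common practice for SCE; both are assumptions of this sketch rather than part of Eq. (6).

```python
import torch
import torch.nn.functional as F

def sce_loss(logits, target, alpha=0.1, beta=1.0, reduction="mean"):
    # Symmetric cross-entropy (cf. Eq. (6)): a weighted sum of the
    # standard CE term and the reversed CE term.
    ce = F.cross_entropy(logits, target, reduction="none")        # standard CE
    p = F.softmax(logits, dim=1)
    q = F.one_hot(target, num_classes=logits.size(1)).float()
    rce = -(p * torch.clamp(torch.log(q), min=-4.0)).sum(dim=1)   # reversed CE
    loss = alpha * ce + beta * rce
    return loss.mean() if reduction == "mean" else loss
```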
MixMatch Loss. For a batch X of labeled samples and a batch U of unlabeled samples (|X| = |U|), MixMatch produces a guessed label q̄ for each unlabeled sample u ∈ U and applies MixUp (Zhang et al., 2018) to obtain the augmented X' and U'. The losses L_X and L_U can be defined as:
\mathcal{L}_X = \frac{1}{|X'|} \sum_{(x,q) \in X'} H(p_x, q), \quad (7)
where px is the prediction of x, q is its one-hot label, and H(·, ·) is the cross entropy.
\mathcal{L}_U = \frac{1}{K \cdot |U'|} \sum_{(u,\bar{q}) \in U'} \|p_u - \bar{q}\|_2^2, \quad (8)
where pu is the prediction of u, q̄ is its guessed one-hot label, and K is the number of classes.
By combining LX with LU , the MixMatch loss can be defined as: L = LX + λU · LU , (9)
where λU is a hyper-parameter.
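Putting Eqs. (7)-(9) together, a sketch of the combined objective is:

```python
import torch
import torch.nn.functional as F

def mixmatch_loss(pred_x, q, pred_u, q_bar, lambda_u):
    # Combined MixMatch objective L = L_X + lambda_u * L_U (Eqs. (7)-(9)).
    # pred_x: (B, K) logits on mixed labeled samples, q: (B, K) mixed labels;
    # pred_u: (B', K) logits on mixed unlabeled samples, q_bar: guessed labels.
    k = pred_x.size(1)
    l_x = -(q * F.log_softmax(pred_x, dim=1)).sum(dim=1).mean()          # Eq. (7)
    l_u = ((F.softmax(pred_u, dim=1) - q_bar) ** 2).sum(dim=1).mean() / k  # Eq. (8)
    return l_x + lambda_u * l_u                                          # Eq. (9)
```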
J COMPUTATIONAL FACILITIES
We conduct all experiments on two Ubuntu 18.04 servers having different GPUs. One has four NVIDIA GeForce RTX 2080 Ti GPUs with 11GB memory (dubbed ‘RTX 2080Ti’) and the another has three NVIDIA Tesla V100 GPUs with 32GB memory (dubbed ‘V100’).
Computational Facilities for Attacks. All experiments are conducted with a single RTX 2080 Ti.
Computational Facilities for Defenses. Since we do not use a memory-efficient implementation of DenseNet-121, we conduct DPSGD experiments on the VGGFace2 dataset with a single V100. Other experiments of baseline defenses are conducted with a single RTX 2080 Ti. For our defense, we adopt PyTorch (Paszke et al., 2019) distributed data-parallel and automatic mixed precision training (Micikevicius et al., 2018) with two RTX 2080 Ti for self-supervised learning on the VGGFace2 dataset. Other experiments are conducted with a single RTX 2080 Ti.
K COMPUTATIONAL COST
In this section, we analyze the computational cost of our method stage by stage, compared to standard supervised learning.
Stage 1. Self-supervised learning is known to have a higher computational cost than standard supervised learning (Chen et al., 2020a; He et al., 2020). In our experiments, SimCLR requires roughly four times the computational cost of standard supervised learning. Since we intend to get a purified instead of well-trained feature extractor, we train the feature extractor (i.e., backbone) for fewer epochs than the original SimCLR to reduce the training time. As described in Section B.3, we find 100 epochs is enough to preserve effectiveness.
Stage 2. Since we freeze the backbone and only train the remaining fully connected layers, the computational cost is roughly 60% of standard supervised learning.
Stage 3. Semi-supervised learning is known to have an extra labeling cost compared with standard supervised learning (Gao et al., 2020). In our experiments, MixMatch requires roughly two times the computational cost of standard supervised learning.
We will explore a more computationally efficient training method in our future work.
L COMPARING OUR DBD WITH DETECTION-BASED BACKDOOR DEFENSES
In this paper, we do not intend to filter malicious and benign samples accurately, as we mentioned in Section 4.4. However, we notice that the second stage of our DBD can serve as a detection-based backdoor defense for it can filter poisoned samples. In this section, we compare the filtering ability of our DBD (stage 2) with existing detection-based backdoor defenses.
Settings. We compare our DBD with two representative detection-based methods, including Spectral Signatures (SS) (Tran et al., 2018) and Activation Clustering (AC) (Chen et al., 2019), on the CIFAR-10 dataset. These detection-based methods (e.g., SS and AC) filter malicious samples from the training set and train the model on non-malicious samples. Specifically, we re-implement SS in PyTorch based on its official code8 and adopt the open-source code9 for AC, following the settings in their original papers. In particular, since SS filters 1.5ε malicious samples for each class, where ε is the key hyper-parameter denoting the upper bound on the number of poisoned training samples, we adopt different ε for a fair comparison.
Results. As shown in Table 22-23, the filtering performance of DBD is on par with that of SS and AC. DBD is even better than those methods when filtering poisoned samples generated by more complicated attacks (i.e., WaNet and Label-Consistent). Besides, we also conduct the standard training on non-malicious samples filtered by SS and AC. As shown in Table 24, the hidden backdoor will still be created in many cases, even though the detection-based defenses are sometimes accurate.
8https://github.com/MadryLab/backdoor_data_poisoning 9https://github.com/ain-soph/trojanzoo/blob/main/trojanvision/defenses/
backdoor/activation_clustering.py
This is mainly because these methods may not be able to remove enough poisoned samples while preserving enough benign samples simultaneously, i.e., there is a trade-off between BA and ASR.
M DBD WITH DIFFERENT SELF-SUPERVISED METHODS
In this paper, we believe that the desired feature extractor should map visually similar inputs to similar positions in the feature space, such that poisoned samples are separated into their source classes. This goal is compatible with that of self-supervised learning. We believe that any self-supervised learning method can be adopted in our method. To further verify this point, we replace the adopted SimCLR with other self-supervised methods in our DBD and examine their performance.
Settings. We replace the SimCLR with two other self-supervised methods, including MoCo-V2 (Chen et al., 2020b) and BYOL (Grill et al., 2020), in our DBD. Except for the adopted self-supervised method, other settings are the same as those used in Section 5.2.
Results. As shown in Table 25, all DBD variants have similar performances. In other words, our DBD is not sensitive to the selection of self-supervised methods.
N DBD WITH DIFFERENT LABEL-NOISE LEARNING METHODS
In the main manuscript, we adopt SCE as the label-noise learning method in our second stage. In this section, we explore whether our DBD is still effective if other label-noise methods are adopted.
Settings. We replace SCE in our DBD with two other label-noise learning methods, including generalized cross entropy (GCE) (Zhang & Sabuncu, 2018) and active passive loss (APL) (Ma et al., 2020). Specifically, we adopt the combination of NCE+RCE in APL and use the default hyperparameters suggested in their original paper. Except for the adopted label-noise learning method, other settings are the same as those used in Section 5.2.
Results. As shown in Table 26, all DBD variants are effective in reducing backdoor threats (i.e., low ASR) while maintaining high benign accuracy. In other words, our DBD is not sensitive to the selection of label-noise learning methods.
O ANALYZING WHY OUR DBD IS EFFECTIVE IN DEFENDING AGAINST LABEL-CONSISTENT ATTACK
In general, the good defense performance of our DBD method against the label-consistent attack (which is one of the clean-label attacks) can be explained from the following aspects:
Firstly, as shown in Figure 1, there is a common observation across different attacks (including both poisoned- and clean-label attacks) that poisoned samples tend to gather together in the feature space learned by the standard supervised learning. The most intuitive idea of our DBD is to prevent such a gathering in the learned feature space, which is implemented by self-supervised learning. As shown in Figure 1(d), the poisoned samples of label-consistent attack are also separated into different areas in the feature space learned by self-supervised learning. This example gives an intuitive explanation about why our DBD can successfully defend against the label-consistent attack.
Furthermore, it is interesting to explore why the poisoned samples in the label-consistent attack are separated under self-supervised learning since all poisoned samples are from the same target class, rather than from different source classes in poisoned-label attacks. For each poisoned sample in this attack, there are two types of features: the trigger and the benign feature with (untargeted) adversarial perturbations. From the perspective of DNNs, benign samples with (untargeted) adversarial perturbations are similar to samples from different source classes, though these samples look similar from the human’s perspective. Thus, it is not surprising that poisoned samples in clean-label attacks can also be separated under self-supervised learning, just like those in poisoned-label attacks. | 1. What is the focus of the paper regarding self-supervised learning and noisy labels?
2. What are the strengths of the proposed approach, particularly its intuition and ease of extension?
3. What are the weaknesses of the paper, such as the lack of theoretical analysis and limited comparison with other methods?
4. How does the reviewer assess the effectiveness of the proposed method in the experiment section?
5. Are there any suggestions or recommendations for future research directions related to the proposed method? | Summary Of The Paper
Review | Summary Of The Paper
The paper uses self-supervised learning to obtain a benign representation and then uses a noisy-label algorithm (SCE) to optimize the prediction model. The empirical performance shows the effectiveness of the proposed method.
Review
Strengths:
The proposed method is quite intuitive and easy to extend to more advanced methods.
The experiment part shows the effectiveness of the proposed method.
Weakness:
The paper lacks a theoretical analysis of the proposed method. I understand that this paper mainly focuses on empirical performance, but it is quite surprising that the proposed method performs well on label-consistent attacks. This is because the proposed method decouples the label corruption and the feature corruption. When label corruption no longer exists, what is the advantage of the proposed method?
For the second-step label-noise learning, there are also many choices besides the symmetric cross-entropy method. Investigating more noisy-label algorithms might be interesting.
The two-step method means the algorithm is not trained in an end-to-end fashion. It would be interesting to investigate the possibility of making it an end-to-end algorithm.
I do not think excluding the detection-based methods is fair since those methods are strong baselines, especially for the BadNets and blended attacks. Also, since the main contribution of the paper is the empirical performance, it is necessary to compare with different kinds of baselines.
ICLR | Title
Backdoor Defense via Decoupling the Training Process
Abstract
Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few training samples. The attacked model behaves normally on benign samples, whereas its prediction will be maliciously changed when the backdoor is activated. We reveal that poisoned samples tend to cluster together in the feature space of the attacked DNN model, which is mostly due to the end-to-end supervised training paradigm. Inspired by this observation, we propose a novel backdoor defense via decoupling the original end-to-end training process into three stages. Specifically, we first learn the backbone of a DNN model via self-supervised learning based on training samples without their labels. The learned backbone will map samples with the same ground-truth label to similar locations in the feature space. Then, we freeze the parameters of the learned backbone and train the remaining fully connected layers via standard training with all (labeled) training samples. Lastly, to further alleviate side-effects of poisoned samples in the second stage, we remove labels of some 'low-credible' samples determined based on the learned model and conduct a semi-supervised fine-tuning of the whole model. Extensive experiments on multiple benchmark datasets and DNN models verify that the proposed defense is effective in reducing backdoor threats while preserving high accuracy in predicting benign samples. Our code is available at https://github.com/SCLBD/DBD.
1 INTRODUCTION
Deep learning, especially deep neural networks (DNNs), has been widely adopted in many realms (Wang et al., 2020b; Li et al., 2020a; Wen et al., 2020) for its high effectiveness. In general, the training of DNNs requires a large amount of training samples and computational resources. Accordingly, third-party resources (e.g., third-party data or servers) are usually involved. While the opacity of the training process brings certain convenience, it also introduces new security threats.
Backdoor attack poses a new security threat to the training process of DNNs (Li et al., 2020c). It maliciously manipulates the prediction of the attacked DNNs by poisoning a few training samples. Specifically, backdoor attackers inject the backdoor trigger (i.e., a particular pattern) to some benign training images and change their labels with the attacker-specified target label. The connection between the backdoor trigger and the target label will be learned by DNNs during the training process. In the inference process, the prediction of attacked DNNs will be changed to the target label when the trigger is present, whereas the attacked DNNs will behave normally on benign samples. As such, users are difficult to realize the existence of hidden backdoors and therefore this attack is a serious threat to the practical applications of DNNs.
In this paper, we first investigate backdoor attacks from the hidden feature space. Our preliminary experiments reveal that the backdoor is embedded in the feature space, i.e., samples with the backdoor trigger (dubbed poisoned samples) tend to cluster together in the feature space. We reveal that this phenomenon is mostly due to the end-to-end supervised training paradigm. Specifically, the excessive learning capability allows DNNs to learn features about the backdoor trigger, while the DNNs can shrink the distance between poisoned samples in the feature space and connect the learned trigger-related features with the target label by the end-to-end supervised training. Based on this understanding, we propose to decouple the end-to-end training process for the backdoor defense. Specifically, we treat the DNNs as two disjoint parts, including a feature extractor (i.e., backbone) and a simple classifier (i.e., the remaining fully connected layers). We first learn the purified feature extractor via self-supervised learning (Kolesnikov et al., 2019; Chen et al., 2020a; Jing & Tian, 2020) with unlabeled training samples (obtained by removing their labels), and then learn the simple classifier via a standard supervised training process based on the learned feature extractor and all training samples. The strong data augmentations involved in the self-supervised learning damage trigger patterns, making them unlearnable during representation learning; and the decoupling process further disconnects trigger patterns and the target label. Accordingly, hidden backdoors cannot be successfully created even if the model is trained on the poisoned dataset under our defense.
∗The first two authors contributed equally to this work. This work was mostly done when Kunzhe Huang and Yiming Li were the research interns at The Chinese University of Hong Kong, Shenzhen. † indicates corresponding authors: Baoyuan Wu (wubaoyuan@cuhk.edu.cn) and Zhan Qin (qinzhan@zju.edu.cn).
Moreover, we further reveal that the representations of poisoned samples generated by the purified extractor are significantly different from those generated by the extractor learned with the standard training process. Specifically, a poisoned sample lies close to samples with its ground-truth label instead of the target label. This phenomenon makes the training of the simple classifier similar to label-noise learning (Wang et al., 2019b; Ma et al., 2020; Berthon et al., 2021). As such, we first filter high-credible training samples (i.e., training samples that are most probably benign) and then use those samples as labeled samples while the remaining part forms the unlabeled samples, to fine-tune the whole model via semi-supervised learning (Rasmus et al., 2015; Berthelot et al., 2019; Sohn et al., 2020). This approach further reduces the adverse effects of poisoned samples.
The main contributions of this paper are three-fold. (1) We reveal that the backdoor is embedded in the feature space, which is mostly due to the end-to-end supervised training paradigm. (2) Based on our understanding, we propose a decoupling-based backdoor defense (DBD) to alleviate the threat of poisoning-based backdoor attacks. (3) Experiments on classical benchmark datasets are conducted, which verify the effectiveness of our defense.
2 RELATED WORK
2.1 BACKDOOR ATTACK
Backdoor attack is an emerging research area, which raises security concerns about training with third-party resources. In this paper, we focus on the poisoning-based backdoor attack towards image classification, where attackers can only modify the dataset instead of other training components (e.g., training loss). This threat could also happen in other tasks (Xiang et al., 2021; Zhai et al., 2021; Li et al., 2022) and with different attacker’s capacities (Nguyen & Tran, 2020; Tang et al., 2020; Zeng et al., 2021a), which are out-of-scope of this paper. In general, existing attacks can be divided into two main categories based on the property of target labels, as follows:
Poison-Label Backdoor Attack. It is currently the most common attack paradigm, where the target label is different from the ground-truth label of poisoned samples. BadNets (Gu et al., 2019) is the first and most representative poison-label attack. Specifically, it randomly selected a few samples from the original benign dataset to generate poisoned samples by stamping the backdoor trigger onto the (benign) image and change their label with an attacker-specified target label. Those generated poisoned samples associated with remaining benign ones were combined to form the poisoned training dataset, which will be delivered to users. After that, (Chen et al., 2017) suggested that the poisoned image should be similar to its benign version for the stealthiness, based on which they proposed the blended attack. Recently, (Xue et al., 2020; Li et al., 2020b; 2021c) further explored how to conduct poison-label backdoor attacks more stealthily. Most recently, a more stealthy and effective attack, the WaNet (Nguyen & Tran, 2021), was proposed. WaNet adopted image warping as the backdoor trigger, which deforms but preserves the image content.
Clean-Label Backdoor Attack. Although the poisoned image generated by poison-label attacks could be similar to its benign version, users may still notice the attack by examining the image-label relationship. To address this problem, Turner et al. (2019) proposed the clean-label attack paradigm, where the target label is consistent with the ground-truth label of poisoned samples. Specifically,
they first leveraged adversarial perturbations or generative models to modify some benign images from the target class and then conducted the standard trigger injection process. This idea was generalized to attack video classification in (Zhao et al., 2020b), where they adopted the targeted universal adversarial perturbation (Moosavi-Dezfooli et al., 2017) as the trigger pattern. Although clean-label backdoor attacks are more stealthy compared with poison-label ones, they usually suffer from relatively poor performance and may even fail in creating backdoors (Li et al., 2020c).
2.2 BACKDOOR DEFENSE
Currently, there are also some approaches to alleviate the backdoor threat. Existing defenses are mostly empirical, which can be divided into five main categories, including (1) detection-based defenses (Xu et al., 2021; Zeng et al., 2021a; Xiang et al., 2022), (2) preprocessing based defenses (Doan et al., 2020; Li et al., 2021b; Zeng et al., 2021b), (3) model reconstruction based defenses (Zhao et al., 2020a; Li et al., 2021a; Zeng et al., 2022), (4) trigger synthesis based defenses (Guo et al., 2020; Dong et al., 2021; Shen et al., 2021), and (5) poison suppression based defenses (Du et al., 2020; Borgnia et al., 2021). Specifically, detection-based defenses examine whether a suspicious DNN or sample is attacked and deny the use of malicious objects; Preprocessing based methods intend to damage trigger patterns contained in attack samples to prevent backdoor activation by introducing a preprocessing module before feeding images into DNNs; Model reconstruction based ones aim at removing the hidden backdoor in DNNs by modifying models directly; The fourth type of defenses synthesize potential trigger patterns at first, followed by a second stage in which the hidden backdoor is eliminated by suppressing their effects; The last type of methods depresses the effectiveness of poisoned samples during the training process to prevent the creation of hidden backdoors. In general, our method is most relevant to this type of defenses.
In this paper, we only focus on the last four types of defenses since they directly improve the robustness of DNNs. Besides, there were also few works focusing on certified backdoor defenses (Wang et al., 2020a; Weber et al., 2020). Their robustness is theoretically guaranteed under certain assumptions, which cause these methods to be generally weaker than empirical ones in practice.
2.3 SEMI-SUPERVISED AND SELF-SUPERVISED LEARNING
Semi-supervised Learning. In many real-world applications, the acquisition of labeled data often relies on manual labeling, which is very expensive. In contrast, obtaining unlabeled samples is much easier. To utilize the power of unlabeled samples together with labeled ones, a great number of semi-supervised learning methods have been proposed (Gao et al., 2017; Berthelot et al., 2019; Van Engelen & Hoos, 2020). Recently, semi-supervised learning was also introduced in improving the security of DNNs (Stanforth et al., 2019; Carmon et al., 2019), where unlabelled samples were utilized in the adversarial training. Most recently, (Yan et al., 2021) discussed how to backdoor semi-supervised learning. However, this approach needs to control other training components (e.g., training loss) in addition to modifying training samples and therefore is out-of-scope of this paper. How to adopt semi-supervised learning for backdoor defense remains an open question.
Self-supervised Learning. This learning paradigm is a subset of unsupervised learning, where DNNs are trained with supervised signals generated from the data itself (Chen et al., 2020a; Grill et al., 2020; Liu et al., 2021). It has been adopted for increasing adversarial robustness (Hendrycks et al., 2019; Wu et al., 2021; Shi et al., 2021). Most recently, there were also a few works (Saha et al., 2021; Carlini & Terzis, 2021; Jia et al., 2021) exploring how to backdoor self-supervised learning. However, these attacks are out-of-scope of this paper since they need to control other training components (e.g., training loss) in addition to modifying training samples.
3 REVISITING BACKDOOR ATTACKS FROM THE HIDDEN FEATURE SPACE
In this section, we analyze the behavior of poisoned samples from the hidden feature space of attacked models and discuss its inherent mechanism.
Settings. We conduct the BadNets (Gu et al., 2019) and label-consistent attack (Turner et al., 2019) on CIFAR-10 dataset (Krizhevsky, 2009) for the discussion. They are representative of poison-label attacks and clean-label attacks, respectively. Specifically, we conduct supervised learning on the poisoned datasets with the standard training process and self-supervised learning on the unlabelled
poisoned datasets with SimCLR (Chen et al., 2020a). We visualize poisoned samples in the hidden feature space generated by attacked DNNs based on the t-SNE (Van der Maaten & Hinton, 2008). More detailed settings are presented in Appendix A.
Results. As shown in Figure 1(a)-1(b), poisoned samples (denoted by ‘black-cross’) tend to cluster together to form a separate cluster after the standard supervised training process, no matter under the poison-label attack or clean-label attack. This phenomenon implies why existing poisoning-based backdoor attacks can succeed. Specifically, the excessive learning capability allows DNNs to learn features about the backdoor trigger. Associated with the end-to-end supervised training paradigm, DNNs can shrink the distance between poisoned samples in the feature space and connect the learned trigger-related features with the target label. In contrast, as shown in Figure 1(c)-1(d), poisoned samples lie closely to samples with their ground-truth label after the self-supervised training process on the unlabelled poisoned dataset. It indicates that we can prevent the creation of backdoors by self-supervised learning, which will be further introduced in the next section.
4 DECOUPLING-BASED BACKDOOR DEFENSE
4.1 PRELIMINARIES
General Pipeline of Backdoor Attacks. Let D = {(x_i, y_i)}_{i=1}^{N} denote the benign training set, where x_i ∈ X = {0, 1, . . . , 255}^{C×W×H} is the image, y_i ∈ Y = {0, 1, . . . , K} is its label, K is the number of classes, and y_t ∈ Y indicates the target label. How to generate the poisoned dataset D_p is the cornerstone of backdoor attacks. Specifically, D_p consists of two subsets, including the modified version of a subset of D and the remaining benign samples, i.e., D_p = D_m ∪ D_b, where D_b ⊂ D, γ ≜ |D_m|/|D| is the poisoning rate, D_m = {(x', y_t) | x' = G(x), (x, y) ∈ D\D_b}, and G : X → X is an attacker-predefined poisoned image generator. For example, G(x) = (1 − λ) ⊗ x + λ ⊗ t, where λ ∈ [0, 1]^{C×W×H}, t ∈ X is the trigger pattern, and ⊗ is the element-wise product in the blended attack (Chen et al., 2017). Once D_p is generated, it will be sent to users who will train DNNs on it. Hidden backdoors will be created after the training process.
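As a concrete illustration, a sketch of the blended generator is given below; it uses a scalar blending ratio for simplicity, rather than the element-wise λ above.

```python
import torch

def blend_poison(x, trigger, lam=0.1):
    # Blended poisoned-image generator: G(x) = (1 - lam) * x + lam * t.
    # x and trigger are float tensors in [0, 1]; lam = 0.1 is the
    # blended ratio used in our experiments.
    return (1.0 - lam) * x + lam * trigger

# Building D_m: stamp the trigger and relabel with the target class y_t, e.g.,
# poisoned = [(blend_poison(x, t), y_t) for (x, y) in selected_benign]
```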
Threat Model. In this paper, we focus on defending against poisoning-based backdoor attacks. The attacker can arbitrarily modify the training set whereas they cannot change other training components (e.g., model structure and training loss). For our proposed defense, we assume that defenders can fully control the training process. This is the scenario where users adopt third-party collected samples for training. Note that we do not assume that defenders have a local benign dataset, which is often required in many existing defenses (Wang et al., 2019a; Zhao et al., 2020a; Li et al., 2021a).
Defender’s Goals. The defender’s goals are to prevent the trained DNN model from predicting poisoned samples as the target label and to preserve the high accuracy on benign samples.
4.2 OVERVIEW OF THE DEFENSE PIPELINE
In this section, we describe the general pipeline of our defense. As shown in Figure 2, it consists of three main stages, including (1) learning a purified feature extractor via self-supervised learning, (2) filtering high-credible samples via label-noise learning, and (3) semi-supervised fine-tuning.
Specifically, in the first stage, we remove the labels of all training samples to form the unlabelled dataset, based on which we train the feature extractor via self-supervised learning. In the second stage, we freeze the learned feature extractor and adopt all training samples to train the remaining fully connected layers via supervised learning. We then filter the α% high-credible samples based on the training loss. The smaller the loss, the more credible the sample. After the second stage, the training set will be separated into two disjoint parts, including high-credible samples and low-credible samples. We use high-credible samples as labeled samples and remove the labels of all low-credible samples to fine-tune the whole model via semi-supervised learning. More detailed information about each stage of our method will be further illustrated in the following sections.
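A high-level sketch chaining the three stages is given below; it reuses the stage sketches that accompany Eqs. (1)-(3), and `drop_labels` is an assumed helper returning the label-free view of the dataset (batch sizes and epoch counts follow Appendices A and B.3).

```python
from torch.utils.data import DataLoader

def dbd_pipeline(model, dataset):
    # Stage 1: purify the backbone via self-supervised learning.
    train_backbone(model.backbone,
                   DataLoader(drop_labels(dataset), batch_size=512, shuffle=True),
                   epochs=100)
    # Stage 2: train the head with SCE, then split D_h / D_l by loss.
    train_head(model, DataLoader(dataset, batch_size=128, shuffle=True))
    mask = filter_high_credible(model, dataset, alpha=50.0)
    labeled = [dataset[i] for i in range(len(dataset)) if mask[i]]
    unlabeled = [dataset[i][0] for i in range(len(dataset)) if not mask[i]]
    # Stage 3: semi-supervised fine-tuning of the whole model.
    return finetune_semi_supervised(model, labeled, unlabeled, epochs=190)
```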
4.3 LEARNING PURIFIED FEATURE EXTRACTOR VIA SELF-SUPERVISED LEARNING
Let Dt denote the training set and fw : X → [0, 1]^K denote the DNN with parameters w = [wc, wf], where wc and wf indicate the parameters of the backbone and the fully connected layer, respectively. In this stage, we optimize wc based on the unlabeled version of Dt via self-supervised learning, as follows:
w_c^* = \arg\min_{w_c} \sum_{(x,y) \in D_t} \mathcal{L}_1(x; w_c), \quad (1)
where L1(·) indicates the self-supervised loss (e.g., NT-Xent in SimCLR (Chen et al., 2020a)). Through the self-supervised learning, the learned feature extractor (i.e., backbone) will be purified even if the training set contains poisoned samples, as illustrated in Section 3.
4.4 FILTERING HIGH-CREDIBLE SAMPLES VIA LABEL-NOISE LEARNING
Once w∗c is obtained, the user can freeze it and adopt Dt to further optimize the remaining wf, i.e.,
w_f^* = \arg\min_{w_f} \sum_{(x,y) \in D_t} \mathcal{L}_2\big(f_{[w_c^*, w_f]}(x), y\big), \quad (2)
where L2(·) indicates the supervised loss (e.g., cross-entropy). After the decoupling-based training process (1)-(2), even if the model is (partly) trained on the poisoned dataset, the hidden backdoor cannot be created since the feature extractor is purified. However, this simple strategy suffers from two main problems. Firstly, compared with a model trained end-to-end via supervised learning, the accuracy in predicting benign samples decreases to some extent, since the learned feature extractor is frozen in the second stage. Secondly, when poison-label attacks appear, poisoned samples serve as 'outliers' that further hinder the learning of the second stage, since those samples lie close to samples with their ground-truth labels instead of the target label in the hidden feature space generated by the learned purified feature extractor. These two problems indicate that we should remove poisoned samples and retrain or fine-tune the whole model.
Specifically, we select high-credible samples $\mathcal{D}_h$ based on the loss $\mathcal{L}_2(\cdot; [w_c^*, w_f^*])$. The high-credible samples are defined as the $\alpha\%$ of training samples with the smallest loss, where $\alpha \in [0, 100]$ is a hyper-parameter. In particular, we adopt the symmetric cross-entropy (SCE) (Wang et al., 2019b) as $\mathcal{L}_2(\cdot)$, inspired by label-noise learning. As shown in Figure 3, compared with the CE loss, the SCE significantly increases the difference between poisoned and benign samples, which further reduces the possibility that the high-credible dataset $\mathcal{D}_h$ still contains poisoned samples. Note that we do not intend to separate poisoned and benign samples precisely; we only want to ensure that the obtained $\mathcal{D}_h$ contains as few poisoned samples as possible.
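To make the filtering rule concrete, below is a minimal PyTorch sketch of the selection step; the function name and its arguments are illustrative, not taken from the released code.

```python
import torch

def filter_high_credible(per_sample_loss: torch.Tensor, alpha: float):
    """Split training samples into high- and low-credible index sets.

    per_sample_loss: 1-D tensor holding the loss L2 of every training
    sample under the frozen backbone and the trained classifier.
    alpha: percentage (0-100) of samples to keep as high-credible.
    """
    n = per_sample_loss.numel()
    k = int(n * alpha / 100.0)
    order = torch.argsort(per_sample_loss)  # ascending: smallest loss first
    return order[:k], order[k:]             # high-credible, low-credible
```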
4.5 SEMI-SUPERVISED FINE-TUNING
After the second stage, the third-party training set $\mathcal{D}_t$ will be separated into two disjoint parts: the high-credible dataset $\mathcal{D}_h$ and the low-credible dataset $\mathcal{D}_l \triangleq \mathcal{D}_t \setminus \mathcal{D}_h$. Let $\hat{\mathcal{D}}_l \triangleq \{x \mid (x, y) \in \mathcal{D}_l\}$ denote the unlabeled version of the low-credible dataset $\mathcal{D}_l$. We fine-tune the whole trained model $f_{[w_c^*, w_f^*]}(\cdot)$ with semi-supervised learning as follows:
$$\min_{w} \mathcal{L}_3(\mathcal{D}_h, \hat{\mathcal{D}}_l; w), \qquad (3)$$
where $\mathcal{L}_3(\cdot)$ denotes the semi-supervised loss (e.g., the loss in MixMatch (Berthelot et al., 2019)). This process prevents the side-effects of poisoned samples while utilizing the useful information they contain, and encourages compatibility between the feature extractor and the simple classifier by learning them jointly instead of separately. Please refer to Section 5.3 for more results.
5 EXPERIMENTS
5.1 EXPERIMENTAL SETTINGS
Datasets and DNNs. We evaluate all defenses on two classical benchmark datasets, including CIFAR-10 (Krizhevsky, 2009) and (a subset of) ImageNet (Deng et al., 2009). We adopt the ResNet-18 (He et al., 2016) for these tasks. More detailed settings are presented in Appendix B.1. Besides, we also provide the results on (a subset of) VGGFace2 (Cao et al., 2018) in Appendix C.
Attack Baselines. We examine all defense approaches in defending against four representative attacks. Specifically, we select BadNets (Gu et al., 2019), the backdoor attack with blended strategy (dubbed ‘Blended’) (Chen et al., 2017), WaNet (Nguyen & Tran, 2021), and the label-consistent attack with adversarial perturbations (dubbed ‘Label-Consistent’) (Turner et al., 2019) for the evaluation. They are representative of patch-based visible and invisible poison-label attacks, non-patch-based poison-label attacks, and clean-label attacks, respectively.
Defense Baselines. We compare our DBD with two defenses having the same defender’s capacities, namely DPSGD (Du et al., 2020) and ShrinkPad (Li et al., 2021b). We also compare with two other approaches that have an additional requirement (i.e., a local benign dataset): the neural cleanse with unlearning strategy (dubbed ‘NC’) (Wang et al., 2019a) and neural attention distillation (dubbed ‘NAD’) (Li et al., 2021a). These baselines are representative of poison-suppression-based, preprocessing-based, trigger-synthesis-based, and model-reconstruction-based defenses, respectively. We also provide results of DNNs trained without any defense (dubbed ‘No Defense’) as another important baseline for reference.
Attack Setups. We use a 2 × 2 square as the trigger pattern on CIFAR-10 dataset and the 32 × 32 Apple logo on ImageNet dataset for the BadNets, as suggested in (Gu et al., 2019; Wang et al., 2019a). For Blended, we adopt the ‘Hello Kitty’ pattern on CIFAR-10 and the random noise pattern on ImageNet, based on the suggestions in (Chen et al., 2017), and set the blended ratio λ = 0.1 on all datasets. The trigger pattern adopted in label-consistent attack is the same as the one used in BadNets. For WaNet, we adopt its default settings on CIFAR-10 dataset. However, on ImageNet dataset, we use different settings optimized by grid-search since the original ones fail. An example of poisoned samples generated by different attacks is shown in Figure 4. Besides, we set the poisoning rate γ1 = 2.5% for label-consistent attack (25% of training samples with the target label) and γ2 = 5% for three other attacks. More details are shown in Appendix B.2.
Defense Setups. For our DBD, we adopt SimCLR (Chen et al., 2020a) as the self-supervised method and MixMatch (Berthelot et al., 2019) as the semi-supervised method. More details about SimCLR and MixMatch are in Appendix I. The filtering rate α is the only key hyper-parameter in DBD, which is set to 50% in all cases. We set the shrinking rate to 10% for the ShrinkPad on all datasets, as suggested in (Li et al., 2021b; Zeng et al., 2021b). In particular, DPSGD and NAD are sensitive to their hyper-parameters. We report their best results in each case based on the grid-search (as shown in Appendix D). Besides, we split a 5% random subset of the benign training set as the local benign dataset for NC and NAD. More implementation details are provided in Appendix B.3.
Evaluation Metrics. We adopt the attack success rate (ASR) and benign accuracy (BA) to measure the effectiveness of all methods1. Specifically, let $\mathcal{D}_{test}$ indicate the (benign) testing set and $C_w: \mathcal{X} \rightarrow \mathcal{Y}$ denote the trained classifier; we have $\text{ASR} \triangleq \Pr_{(x,y) \in \mathcal{D}_{test}}\{C_w(G(x)) = y_t \mid y \neq y_t\}$ and $\text{BA} \triangleq \Pr_{(x,y) \in \mathcal{D}_{test}}\{C_w(x) = y\}$, where $y_t$ is the target label and $G(\cdot)$ is the poisoned image generator. In particular, the lower the ASR and the higher the BA, the better the defense.
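As a concrete reference, both metrics can be estimated with a short PyTorch routine such as the sketch below, where `poison_fn` and `target_label` are placeholders for the attack-specific generator $G(\cdot)$ and the target label $y_t$.

```python
import torch

@torch.no_grad()
def asr_and_ba(model, test_loader, poison_fn, target_label):
    """Estimate ASR and BA on the benign test set (a sketch)."""
    hit = n_asr = correct = total = 0
    for x, y in test_loader:
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
        mask = y != target_label            # ASR excludes samples with y = y_t
        if mask.any():
            pred_p = model(poison_fn(x[mask])).argmax(dim=1)
            hit += (pred_p == target_label).sum().item()
            n_asr += int(mask.sum())
    return hit / n_asr, correct / total     # (ASR, BA)
```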
5.2 MAIN RESULTS
Comparing DBD with Defenses having the Same Requirements. As shown in Tables 1-2, DBD is significantly better than defenses having the same requirements (i.e., DPSGD and ShrinkPad) in defending against all attacks. For example, the benign accuracy of DBD is more than 20% higher and the attack success rate is more than 5% lower than those of DPSGD in all cases. Specifically, the attack success rate of models with DBD is less than 2% in all cases (mostly < 0.5%), which verifies that our method can successfully prevent the creation of hidden backdoors. Moreover, the decrease in benign accuracy is less than 2% when defending against poison-label attacks, compared with models trained without any defense. Our method performs even better on the relatively larger dataset, where all baseline methods become less effective. These results verify the effectiveness of our method.
1Among all defense methods, the one with the best performance is indicated in boldface and the value with underline denotes the second-best result.
Comparing DBD with Defenses having Extra Requirements. We also compare our defense with two other methods (i.e., NC and NAD), which have the additional requirement that defenders possess a benign local dataset. As shown in Tables 1-2, NC and NAD are better than DPSGD and ShrinkPad, as expected, since they adopt additional information from the benign local dataset. Nevertheless, our method is still better than both, even when their performance is tuned to the best while our method uses only the default settings. Specifically, the BA of NC is on par with that of our method; however, this comes at the sacrifice of ASR. Especially on the ImageNet dataset, NC has limited effect in reducing the ASR. In contrast, our method reaches the smallest ASR while its BA is either the highest or the second-highest in almost all cases. These results verify the effectiveness of our method again.
Results. As shown in Figure 7, our method can still prevent the creation of hidden backdoors even when the poisoning rate reaches 20%. Besides, DBD also maintains high benign accuracy. In other words, our method is effective in defending against attacks of different strengths.
5.3 ABLATION STUDY
There are four key strategies in DBD, including (1) obtaining purified feature extractor, (2) using SCE instead of CE in the second stage, (3) reducing side-effects of low-credible samples, and (4) fine-tuning the whole model via semi-supervised learning. Here we verify their effectiveness.
Settings. We compare the proposed DBD with its four variants, including (1) DBD without SS, (2) SS with CE, (3) SS with SCE, and (4) SS with SCE + Tuning, on the CIFAR-10 dataset. Specifically, in the first variant, we replace the backbone generated by self-supervised learning with one trained in a supervised fashion and keep other parts unchanged. In the second variant, we freeze the backbone learned via self-supervised learning and train the remaining fully-connected layers with the cross-entropy loss on all training samples. The third variant is similar to the second one; the only difference is that it uses symmetric cross-entropy instead of cross-entropy to train the fully-connected layers. The last variant is an advanced version of the third one, which further fine-tunes the fully-connected layers on high-credible samples filtered by the third variant.
Results. As shown in Table 3, comparing our DBD with its first variant and with the model trained without any defense shows that decoupling the original end-to-end supervised training process is effective in preventing the creation of hidden backdoors. Besides, comparing the second and third DBD variants verifies the effectiveness of the SCE loss in defending against poison-label backdoor attacks. Moreover, the fourth DBD variant has relatively lower ASR and BA compared with the third one. This phenomenon is due to the removal of low-credible samples, and it indicates that reducing the side-effects of low-credible samples while adopting their useful information is important for the defense. Comparing the fourth variant with the proposed DBD also verifies that fine-tuning the whole model via semi-supervised learning is useful.
5.4 RESISTANCE TO POTENTIAL ADAPTIVE ATTACKS
In our paper, we adopted the classical defense setting in which attackers have no information about the defense. Attackers may design adaptive attacks if they know of the existence of our DBD. The most straightforward idea is to manipulate the self-supervised training process so that poisoned samples remain in a new cluster after self-supervised learning. However, attackers are not allowed to do so under our threat model, where they can only modify the third-party dataset. Despite this, if attackers know the model structure used by defenders, they may design adaptive attacks by optimizing the trigger pattern so that poisoned samples still form a new cluster after self-supervised learning, as follows:
Problem Formulation. For a $K$-classification problem, let $\mathcal{X}' = \{x_i\}_{i=1}^{M}$ denote the benign images selected for poisoning, $\mathcal{X}_j = \{x_i\}_{i=1}^{N_j}$ denote the benign images with ground-truth label $j$, and $g$ be a trained backbone. Given an attacker-predefined poisoned image generator $G$, the adaptive attack aims to optimize a trigger pattern $t$ by minimizing the distance between poisoned images while maximizing the distance between the center of poisoned images and the centers of the clusters of benign images with different labels, i.e.,
$$\min_{t} \frac{1}{M} \sum_{x \in \mathcal{X}'} d\left(g(G(x; t)), g'\right) - \frac{1}{K} \sum_{i=1}^{K} d\left(g', g_i\right), \qquad (4)$$
where $g' \triangleq \frac{1}{M} \sum_{x \in \mathcal{X}'} g(G(x; t))$, $g_i \triangleq \frac{1}{N_i} \sum_{x \in \mathcal{X}_i} g(x)$, and $d$ is a distance metric.
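For illustration, the optimization in Eq. (4) with the $\ell_2$ distance can be sketched in PyTorch as follows; `apply_trigger`, `x_pool`, and `class_centers` are placeholder names for $G(\cdot; t)$, $\mathcal{X}'$, and the class centers $g_i$.

```python
import torch

def optimize_trigger(g, apply_trigger, x_pool, class_centers,
                     steps=100, lr=0.1):
    """Sketch of the adaptive attack in Eq. (4) with d = l2 distance.

    g: trained (frozen) backbone; class_centers: a [K, d] tensor of
    per-class feature means g_i. All names here are illustrative.
    """
    t = torch.zeros(3, 32, 32, requires_grad=True)  # 32x32 trigger, as in 5.4
    opt = torch.optim.Adam([t], lr=lr)
    for _ in range(steps):
        feats = g(apply_trigger(x_pool, t))          # g(G(x; t))
        center = feats.mean(dim=0)                   # g'
        intra = (feats - center).norm(dim=1).mean()  # cluster poisoned samples
        inter = (class_centers - center).norm(dim=1).mean()
        loss = intra - inter                         # minimize Eq. (4)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return t.detach()
```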
Settings. We adopt the CIFAR-10 dataset and use the $\ell_2$ norm as the distance metric in this experiment. Specifically, we assume that attackers have the entire benign dataset, based on which they can train a backbone of the type adopted in the first stage of our DBD. We use the Adam optimizer to solve the above optimization problem for 100 epochs with a learning rate of 0.1. The trigger size is set to 32×32, which means the attacker can completely modify the content of poisoned samples, regardless of the original semantic information and the stealthiness of the attack. This setting is intended to maximize the attack's ability, since clustering poisoned samples together is very difficult in self-supervised learning.
Results. The adaptive attack works well when there is no defense (BA=94.96%, ASR=99.70%). However, it still fails against our DBD (BA=93.21%, ASR=1.02%). In other words, our defense is resistant to this adaptive attack. This is most probably because the trigger optimized on the original backbone becomes far less effective once the model is retrained, since the model parameters change due to random initialization and weight updates during training.
6 CONCLUSION
The mechanism of poisoning-based backdoor attacks is to establish a latent connection between trigger patterns and the target label during the training process. In this paper, we revealed that this connection is learned mostly due to the end-to-end supervised training paradigm. Motivated by this understanding, we proposed a decoupling-based backdoor defense, which first learns the backbone via self-supervised learning and then learns the remaining fully-connected layers via classical supervised learning. We also introduced a label-noise learning method to determine high-credible and low-credible samples, based on which we fine-tuned the whole model via semi-supervised learning. Extensive experiments verify that our defense is effective in reducing backdoor threats while preserving high accuracy on benign samples.
ACKNOWLEDGMENTS
Baoyuan Wu is supported in part by the National Natural Science Foundation of China under Grant 62076213, the University Development Fund of the Chinese University of Hong Kong, Shenzhen under Grant 01001810, and the Special Project Fund of Shenzhen Research Institute of Big Data under Grant T00120210003. Zhan Qin is supported in part by the National Natural Science Foundation of China under Grant U20A20178, the National Key Research and Development Program of China under Grant 2020AAA0107705, and the Research Laboratory for Data Security and Privacy, Zhejiang University-Ant Financial Fintech Center. Kui Ren is supported by the National Key Research and Development Program of China under Grant 2020AAA0107705.
ETHICS STATEMENT
DNNs are widely adopted in many mission-critical areas (e.g., face recognition) and therefore their security is of great significance. The vulnerability of DNNs to backdoor attacks raises serious concerns about using third-party training resources. In this paper, we propose a general training pipeline to obtain backdoor-free DNNs, even if the training dataset contains poisoned samples. This work has no ethical issues in general since our method is purely defensive and does not reveal any new vulnerabilities of DNNs. However, we need to mention that our defense can be adopted only when training with untrusted samples, and backdoor attacks could happen in other scenarios. People should not be too optimistic about eliminating backdoor threats.
REPRODUCIBILITY STATEMENT
The detailed descriptions of datasets, models, and training settings are in Appendix A-D. We also describe the computational facilities and cost in Appendix J-K. Codes of our DBD are also open-sourced.
A DETAILED SETTINGS FOR REVISITING BACKDOOR ATTACKS
Attack Setups. We conduct the BadNets (Gu et al., 2019) and label-consistent attack (Turner et al., 2019) with the target label $y_t = 3$ on the CIFAR-10 dataset (Krizhevsky, 2009). The trigger patterns are the same as those presented in Section 5.2. In particular, we implement the label-consistent attack with adversarial perturbations, as suggested in its original paper (Turner et al., 2019). Specifically, we use projected gradient descent (PGD) (Madry et al., 2018) to generate adversarial perturbations within the $\ell_\infty$-ball with maximum perturbation size $\epsilon = 16$.
Training Setups. We conduct supervised learning on the poisoned datasets with the standard training process, and self-supervised learning on the unlabelled poisoned datasets with SimCLR (Chen et al., 2020a). The supervised training is conducted based on the open-source code2. Specifically, we use the SGD optimizer with momentum 0.9, weight decay of 5 × 10−4, and an initial learning rate of 0.1. The batch size is set to 128 and we train the ResNet-18 model for 200 epochs. The learning rate is decreased by a factor of 10 at epochs 100 and 150, respectively. Besides, we add triggers before performing the data augmentation (e.g., random crop and horizontal flipping). For the self-supervised training, we use the stochastic gradient descent (SGD) optimizer with a momentum of 0.9, an initial learning rate of 0.4, and a weight decay factor of 5 × 10−4. We use a batch size of 512, and train the backbone for 1,000 epochs. We decay the learning rate with the cosine decay schedule (Loshchilov & Hutter, 2016) without a restart. Besides, we also adopt strong data augmentation techniques, including random crop and resize (with random flip), color distortions, and Gaussian blur, as suggested in (Chen et al., 2020a). All models are trained until convergence.
t-SNE Visualization Settings. We treat the output of the last residual unit as the feature representation and use the tsne-cuda library (Chan et al., 2019) to get the feature embedding of all samples. To have a better visualization, we adopt all poisoned samples and randomly select 10% benign samples for visualizing models under the supervised learning, and adopt 30% poisoned samples and 10% benign samples for those under the self-supervised learning.
B DETAILED SETTINGS FOR MAIN EXPERIMENTS
B.1 MORE DETAILS ABOUT DATASETS AND DNNS
Due to the limitations of computational resources and time, we adopt a subset randomly selected from the original ImageNet. More detailed information about the datasets and DNNs adopted in the main experiments of our paper is presented in Table 4.
B.2 MORE DETAILS ABOUT ATTACK SETTINGS
Attack Setups. We conduct the BadNets (Gu et al., 2019), blended attack (dubbed ‘Blended’) (Chen et al., 2017), label-consistent attack (dubbed ‘Label-Consistent’) (Turner et al., 2019), and WaNet (Nguyen & Tran, 2021) with the target label $y_t = 3$ on all datasets. The trigger patterns are the same as those presented in Section 5.2. In particular, we set the blended ratio $\lambda = 0.1$ for the blended attack on all datasets and examine the label-consistent attack with maximum perturbation size $\epsilon \in \{16, 32\}$. Besides, WaNet assumed in its original paper that attackers can fully control the whole training process. However, we found that WaNet only modifies the training data, while other training components (e.g., training loss, training schedule, and model structure) are the same as those used in the standard training process. As such, we re-implement its code in the poisoning-based attack scenario based on its official code3. Specifically, following the settings in its original paper, we set the noise rate $\rho_n = 0.2$, control grid size $k = 4$, and warping strength $s = 0.5$ on
2https://github.com/kuangliu/pytorch-cifar 3https://github.com/VinAIResearch/Warping-based_Backdoor_Attack-release
the CIFAR-10 dataset. However, we found that the default $k$ and $s$ are too small to make the attack work on the ImageNet dataset (as shown in Tables 5-6). Besides, the ‘noise mode’ also significantly reduces the attack effectiveness (as shown in Table 7). As such, we set $k = 224$ and $s = 1$ and train models without the noise mode on the ImageNet dataset.
Training Setups. On the CIFAR-10 dataset (Krizhevsky, 2009), the settings are the same as those described in Section A. On the ImageNet dataset (Deng et al., 2009), we conduct experiments based on the open-source code4. Specifically, we use the SGD optimizer with momentum 0.9, weight decay of 10−4, and an initial learning rate of 0.1. The batch size is set to 256 and we train the ResNet-18 model for 90 epochs. The learning rate is decreased by a factor of 10 at epochs 30 and 60, respectively. Besides, since the raw images in the ImageNet dataset are of different sizes, we resize them to 3 × 224 × 224 before adding triggers.
B.3 MORE DETAILS ABOUT DEFENSE SETTINGS
Settings for NC. We conduct reverse engineering and anomaly detection based on its open-source code5. We implement the ‘unlearning’ method to patch attacked models, as suggested in its paper (Wang et al., 2019a). We randomly select 5% benign training samples as the local benign dataset, which is used in the ‘unlearning’ process. Unless otherwise specified, other settings are the same as those used in (Wang et al., 2019a).
Settings for NAD. We implement this method based on its open-source code6. The original NAD only conducted experiments on the WideResNet model; in our paper, we calculate the NAD loss over the last residual group for the ResNet-18. The local benign dataset is the same as the one adopted in NC, and it is used in the fine-tuning and distillation processes of NAD. Unless otherwise specified, other settings are the same as those used in (Li et al., 2021a).
Settings for DPSGD. The original DPSGD was conducted on the MNIST dataset and implemented in the TensorFlow framework. In this paper, we re-implement it based on the differentially private SGD method provided by Opacus7. Specifically, we replace the original SGD optimizer with the differentially private one, as suggested in (Du et al., 2020). There are two important hyper-parameters in DPSGD: the noise scale σ and the clipping bound C. In the experiments, we set C = 1 and select the best σ by grid-search.
4https://github.com/pytorch/examples/tree/master/imagenet 5https://github.com/bolunwang/backdoor 6https://github.com/bboylyg/NAD 7https://github.com/pytorch/opacus
Settings for ShrinkPad. We set the shrinking rate to 10% on all datasets, as suggested in (Li et al., 2021b; Zeng et al., 2021b). Following their settings, we pad 0-pixels at the bottom right of the shrunk image to expand it to its original size.
Settings for our Defense. In the first stage, we adopt SimCLR (Chen et al., 2020a) to perform self-supervised learning. We train backbones for 100 instead of 1,000 epochs to reduce computational costs while preserving effectiveness; other settings are the same as those described in Section A, and we use the same settings across all datasets, models, and attacks. In the second stage, we use the Adam optimizer with a learning rate of 0.002 and set the batch size to 128. We train the fully connected layers for 10 epochs with the SCE loss (Wang et al., 2019b). The two hyper-parameters involved in the SCE (i.e., α and β in its original paper) are set to 0.1 and 1, respectively. After that, we filter 50% high-credible samples. We again use the same settings across all datasets, models, and attacks. In the third stage, we adopt MixMatch (Berthelot et al., 2019) for semi-supervised fine-tuning with the settings suggested in its original paper. Specifically, we use the Adam optimizer with a learning rate of 0.002 and a batch size of 64, and fine-tune the model for 190 epochs on CIFAR-10 and 80 epochs on ImageNet. We set the temperature T = 0.5 and the weight of the unsupervised loss λu = 15 on CIFAR-10 and λu = 6 on ImageNet. Moreover, we re-filter high-credible samples after every epoch of the third stage based on the SCE loss.
C DEFENDING AGAINST ATTACKS ON VGGFACE2 DATASET
Dataset and DNN. Due to the limitations of computational resources and time, we adopt a subset randomly selected from the original VGGFace2 (Cao et al., 2018). More details are in Table 8.
Settings for Attacks. For the training of models on the VGGFace2 dataset, the batch size is set to 32 and we conduct experiments on the DenseNet-121 model (Huang et al., 2017). An example of poisoned samples generated by different attacks are in Figure 5. Other settings are the same as those used on the ImageNet dataset.
Settings for Defenses. For NAD, we calculate the NAD loss over the second to last layer for the DenseNet-121. Other settings are the same as those used on the ImageNet dataset.
Results. As shown in Table 9, our defense still reaches the best performance, even compared with NC and NAD. Specifically, the BA of NC is on par with that of our method, but this comes at the sacrifice of ASR. These results verify the effectiveness of our defense again.
D SEARCHING BEST RESULTS FOR DPSGD AND NAD
The effectiveness of DPSGD and NAD is sensitive to their hyper-parameters. Here we search for their best results based on the criterion that ‘BA − ASR’ reaches the highest value after the defense.
D.1 SEARCHING BEST RESULTS FOR DPSGD
In general, the larger the σ, the smaller the ASR, but also the smaller the BA. The results of DPSGD are shown in Tables 10-12, where the best results are marked in boldface.
D.2 SEARCHING BEST RESULTS FOR NAD
We found that the fine-tuning stage of NAD is sensitive to the learning rate. We search for the best initial learning rate in {0.1, 0.01, 0.001}. As shown in Tables 13-15, a very large learning rate significantly reduces the BA, while a very small learning rate cannot reduce the ASR effectively. To keep a relatively large BA while maintaining a small ASR, we set η = 0.01 in the fine-tuning stage.
The distillation stage of NAD is also sensitive to its hyper-parameter β. We select the best β via the grid-search. The results are shown in Table 16-19.
E DEFENDING AGAINST LABEL-CONSISTENT ATTACK WITH A SMALLER POISONING RATE
For the label-consistent attack, besides the 2.5% poisoning rate examined in the main manuscript, 0.6% is also an important setting provided in its original paper (Turner et al., 2019). In this section, we compare different defenses against the label-consistent attack with poisoning rate γ = 0.6%.
As shown in Table 20, when defending against the label-consistent attack with a 0.6% poisoning rate, our method is still significantly better than defenses having the same requirements (i.e., DPSGD and ShrinkPad). Even compared with those having the additional requirement (i.e., NC and NAD) under their best settings, our defense is still better than or on par with them under its default settings. These results verify the effectiveness of our method again.
F DEFENDING AGAINST ATTACKS WITH DIFFERENT TRIGGER PATTERNS
In this section, we verify whether DBD is still effective when different trigger patterns are adopted.
Settings. For simplicity, we adopt the BadNets on the CIFAR-10 dataset as an example for the discussion. Specifically, we change the location and size of the backdoor trigger while keeping other settings unchanged to evaluate the BA and ASR before and after our defense.
Results. As shown in Table 21, although there are some fluctuations, the ASR is smaller than 2% while the BA is greater than 92% in all cases. In other words, our method is effective in defending against attacks with different trigger patterns.
G DEFENDING AGAINST ATTACKS WITH DYNAMIC TRIGGERS
In this section, we verify whether DBD is still effective when attackers adopt dynamic triggers.
Settings. We compare DBD with MESA (Qiao et al., 2019) in defending the dynamic attack discussed in (Qiao et al., 2019) on the CIFAR-10 dataset as an example for the discussion. This dynamic attack uses a distribution of triggers instead of a fixed trigger.
Results. The BA and ASR of DBD are 92.4% and 0.4%, while those of MESA are 94.8% and 2.4%. However, we find that MESA fails to defend against the blended attack (since it cannot correctly detect the trigger), whereas DBD remains effective. These results verify the effectiveness of our defense.
H DISCUSSIONS
H.1 EFFECTS OF HYPER-PARAMETERS
Settings. Here we analyze the effect of filtering rate α, which is the only key method-related hyperparameter in our DBD. We adopt the results on the CIFAR-10 dataset for discussion. Except for the studied parameter α, other settings are the same as those used in Section 5.2.
Figure 6: The effects of the filtering rate α on the BA (%) and ASR (%) for BadNets, Blended, WaNet, and Label-Consistent.
Figure 7: The effects of the poisoning rate γ on the BA (%) and ASR (%) for BadNets, Blended, and WaNet.
Results. The number of labeled samples used in the third stage increases with the filtering rate α, but so does the probability that the filtered high-credible dataset contains poisoned samples. As shown in Figure 6, DBD maintains relatively high benign accuracy even when the filtering rate α is relatively small (e.g., 30%), mostly due to the high quality of the learned purified feature extractor and the semi-supervised fine-tuning process. DBD also reaches a nearly 0% attack success rate in all cases. However, we also note that the high-credible dataset may contain poisoned samples when α is very large, which in turn creates hidden backdoors again during the fine-tuning process. Defenders should specify α based on their specific needs.
H.2 DEFENDING ATTACKS WITH VARIOUS POISONING RATES
Settings. We evaluate our method in defending against attacks with different poisoning rate γ on CIFAR-10 dataset. Except for γ, other settings are the same as those used in Section 5.2.
I MORE DETAILS ABOUT SIMCLR, SCE, AND MIXMATCH
NT-Xent Loss in SimCLR. Given a mini-batch containing N different samples, SimCLR first applies two separate data augmentations to each sample to obtain 2N augmented samples. The loss for a positive pair of samples (i, j) is defined as:
$$\mathcal{L}_{i,j} = -\log \frac{\exp\left(\mathrm{sim}(z_i, z_j)/\tau\right)}{\sum_{k=1}^{2N} \mathbb{I}\{k \neq i\} \cdot \exp\left(\mathrm{sim}(z_i, z_k)/\tau\right)}, \qquad (5)$$
where $\mathrm{sim}(\cdot, \cdot)$ is the cosine similarity, $z_i$ is the feature representation of sample $i$, $\tau$ is the temperature parameter, and $\mathbb{I}\{k \neq i\} \in \{0, 1\}$ indicates whether $k \neq i$. The NT-Xent loss is computed across all $2N$ positive pairs in the mini-batch.
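A compact PyTorch sketch of this loss, assuming the batch is arranged so that consecutive rows form positive pairs, is given below; the official SimCLR implementation is more general.

```python
import torch
import torch.nn.functional as F

def nt_xent(z, tau=0.5):
    """NT-Xent loss of Eq. (5) for 2N augmented views (a sketch).

    z: (2N, d) feature matrix arranged so that rows (2k, 2k+1) form
    a positive pair.
    """
    z = F.normalize(z, dim=1)                  # cosine similarity via dot product
    sim = z @ z.t() / tau
    n2 = z.size(0)
    mask = torch.eye(n2, dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf')) # the indicator removes k = i terms
    pos = torch.arange(n2) ^ 1                 # positive index: 0<->1, 2<->3, ...
    return F.cross_entropy(sim, pos)           # mean of -log softmax over pairs
```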
SCE. The symmetric cross-entropy (SCE) is defined as:
$$\mathcal{L}_{SCE} = H(p, q) + H(q, p), \qquad (6)$$
where $H(p, q)$ is the cross-entropy, $H(q, p)$ is the reverse cross-entropy, $p$ is the prediction, and $q$ is the one-hot label (of the evaluated sample).
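A minimal PyTorch sketch of Eq. (6) is shown below; clamping the one-hot label before the logarithm is our assumption for handling $\log(0)$, and the weights follow the $\alpha = 0.1$, $\beta = 1$ setting reported in Appendix B.3.

```python
import torch
import torch.nn.functional as F

def sce_loss(logits, target, a=0.1, b=1.0, eps=1e-4):
    """Symmetric cross-entropy, Eq. (6): weighted sum of CE and reverse CE.

    a, b are the alpha/beta weights of the original SCE paper; the clamp
    by eps before log is an implementation detail we assume.
    """
    ce = F.cross_entropy(logits, target)               # H(p, q)
    p = F.softmax(logits, dim=1)
    q = F.one_hot(target, logits.size(1)).float().clamp(min=eps)
    rce = -(p * q.log()).sum(dim=1).mean()             # H(q, p), reverse term
    return a * ce + b * rce
```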
MixMatch Loss. For a batch $\mathcal{X}$ of labeled samples and a batch $\mathcal{U}$ of unlabeled samples ($|\mathcal{X}| = |\mathcal{U}|$), MixMatch produces a guessed label $\bar{q}$ for each unlabeled sample $u \in \mathcal{U}$ and applies MixUp (Zhang et al., 2018) to obtain the augmented $\mathcal{X}'$ and $\mathcal{U}'$. The losses $\mathcal{L}_{\mathcal{X}}$ and $\mathcal{L}_{\mathcal{U}}$ are defined as:
$$\mathcal{L}_{\mathcal{X}} = \frac{1}{|\mathcal{X}'|} \sum_{(x, q) \in \mathcal{X}'} H(p_x, q), \qquad (7)$$
where $p_x$ is the prediction of $x$, $q$ is its one-hot label, and $H(\cdot, \cdot)$ is the cross-entropy.
$$\mathcal{L}_{\mathcal{U}} = \frac{1}{K \cdot |\mathcal{U}'|} \sum_{(u, \bar{q}) \in \mathcal{U}'} \|p_u - \bar{q}\|_2^2, \qquad (8)$$
where $p_u$ is the prediction of $u$, $\bar{q}$ is its guessed one-hot label, and $K$ is the number of classes.
By combining $\mathcal{L}_{\mathcal{X}}$ with $\mathcal{L}_{\mathcal{U}}$, the MixMatch loss is defined as:
$$\mathcal{L} = \mathcal{L}_{\mathcal{X}} + \lambda_{\mathcal{U}} \cdot \mathcal{L}_{\mathcal{U}}, \qquad (9)$$
where $\lambda_{\mathcal{U}}$ is a hyper-parameter.
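Putting Eqs. (7)-(9) together, the combined objective can be sketched as follows for batches that have already been augmented and mixed; this is a simplified illustration rather than the full MixMatch pipeline.

```python
import torch
import torch.nn.functional as F

def mixmatch_loss(logits_x, targets_x, logits_u, guessed_u,
                  lambda_u, num_classes):
    """Combined MixMatch objective, Eqs. (7)-(9) (a sketch).

    targets_x are the (possibly soft, after MixUp) labels of X', and
    guessed_u are the guessed labels of U'.
    """
    # Eq. (7): cross-entropy on the labeled batch.
    l_x = -(targets_x * F.log_softmax(logits_x, dim=1)).sum(dim=1).mean()
    # Eq. (8): squared l2 distance between predictions and guesses.
    p_u = F.softmax(logits_u, dim=1)
    l_u = ((p_u - guessed_u) ** 2).sum(dim=1).mean() / num_classes
    # Eq. (9): weighted combination.
    return l_x + lambda_u * l_u
```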
J COMPUTATIONAL FACILITIES
We conduct all experiments on two Ubuntu 18.04 servers with different GPUs. One has four NVIDIA GeForce RTX 2080 Ti GPUs with 11GB memory (dubbed ‘RTX 2080Ti’) and the other has three NVIDIA Tesla V100 GPUs with 32GB memory (dubbed ‘V100’).
Computational Facilities for Attacks. All experiments are conducted with a single RTX 2080 Ti.
Computational Facilities for Defenses. Since we do not use a memory-efficient implementation of DenseNet-121, we conduct DPSGD experiments on the VGGFace2 dataset with a single V100. Other experiments of baseline defenses are conducted with a single RTX 2080 Ti. For our defense, we adopt PyTorch (Paszke et al., 2019) distributed data-parallel and automatic mixed precision training (Micikevicius et al., 2018) with two RTX 2080 Ti for self-supervised learning on the VGGFace2 dataset. Other experiments are conducted with a single RTX 2080 Ti.
K COMPUTATIONAL COST
In this section, we analyze the computational cost of our method stage by stage, compared with standard supervised learning.
Stage 1. Self-supervised learning is known to have a higher computational cost than standard supervised learning (Chen et al., 2020a; He et al., 2020). In our experiments, SimCLR requires roughly four times the computational cost of standard supervised learning. Since we intend to obtain a purified rather than fully trained feature extractor, we train the feature extractor (i.e., backbone) for fewer epochs than the original SimCLR to reduce the training time. As described in Section B.3, we find 100 epochs is enough to preserve effectiveness.
Stage 2. Since we freeze the backbone and only train the remaining fully connected layers, the computational cost is roughly 60% of standard supervised learning.
Stage 3. Semi-supervised learning is known to have an extra labeling cost compared with standard supervised learning (Gao et al., 2020). In our experiments, MixMatch requires roughly twice the computational cost of standard supervised learning.
We will explore more computationally efficient training methods in future work.
L COMPARING OUR DBD WITH DETECTION-BASED BACKDOOR DEFENSES
In this paper, we do not intend to filter malicious and benign samples precisely, as we mentioned in Section 4.4. However, we notice that the second stage of our DBD can serve as a detection-based backdoor defense, since it can filter poisoned samples. In this section, we compare the filtering ability of our DBD (stage 2) with existing detection-based backdoor defenses.
Settings. We compare our DBD with two representative detection-based methods, Spectral Signatures (SS) (Tran et al., 2018) and Activation Clustering (AC) (Chen et al., 2019), on the CIFAR-10 dataset. These detection-based methods filter malicious samples from the training set and train the model on the remaining samples. Specifically, we re-implement SS in PyTorch based on its official code8 and adopt the open-source code9 for AC, following the settings in their original papers. In particular, since SS filters 1.5ε malicious samples per class, where the key hyper-parameter ε is an upper bound on the number of poisoned training samples, we adopt different values of ε for a fair comparison.
Results. As shown in Table 22-23, the filtering performance of DBD is on par with that of SS and AC. DBD is even better than those methods when filtering poisoned samples generated by more complicated attacks (i.e., WaNet and Label-Consistent). Besides, we also conduct the standard training on non-malicious samples filtered by SS and AC. As shown in Table 24, the hidden backdoor will still be created in many cases, even though the detection-based defenses are sometimes accurate.
8https://github.com/MadryLab/backdoor_data_poisoning
9https://github.com/ain-soph/trojanzoo/blob/main/trojanvision/defenses/backdoor/activation_clustering.py
This is mainly because these methods may not be able to remove enough poisoned samples while simultaneously preserving enough benign samples, i.e., there is a trade-off between BA and ASR.
M DBD WITH DIFFERENT SELF-SUPERVISED METHODS
In this paper, we believe that the desired feature extractor maps visually similar inputs to similar positions in the feature space, such that poisoned samples are separated into their source classes. This goal is compatible with that of self-supervised learning, and we believe that any self-supervised learning method can be adopted in our defense. To further verify this point, we replace the adopted SimCLR with other self-supervised methods in our DBD and examine their performance.
Settings. We replace the SimCLR with two other self-supervised methods, including MoCo-V2 (Chen et al., 2020b) and BYOL (Grill et al., 2020), in our DBD. Except for the adopted selfsupervised method, other settings are the same as those used in Section 5.2.
Results. As shown in Table 25, all DBD variants have similar performances. In other words, our DBD is not sensitive to the selection of self-supervised methods.
N DBD WITH DIFFERENT LABEL-NOISE LEARNING METHODS
In the main manuscript, we adopt SCE as the label-noise learning method in our second stage. In this section, we explore whether our DBD is still effective if other label-noise methods are adopted.
Settings. We replace SCE in our DBD with two other label-noise learning methods, including generalized cross entropy (GCE) (Zhang & Sabuncu, 2018) and active passive loss (APL) (Ma et al., 2020). Specifically, we adopt the combination of NCE+RCE in APL and use the default hyperparameters suggested in their original paper. Except for the adopted label-noise learning method, other settings are the same as those used in Section 5.2.
Results. As shown in Table 26, all DBD variants are effective in reducing backdoor threats (i.e., low ASR) while maintaining high benign accuracy. In other words, our DBD is not sensitive to the selection of label-noise learning methods.
O ANALYZING WHY OUR DBD IS EFFECTIVE IN DEFENDING AGAINST LABEL-CONSISTENT ATTACK
In general, the good defense performance of our DBD method against the label-consistent attack (which is one of the clean-label attacks) can be explained from the following aspects:
Firstly, as shown in Figure 1, there is a common observation across different attacks (including both poison-label and clean-label attacks) that poisoned samples tend to gather together in the feature space learned by standard supervised learning. The most intuitive idea of our DBD is to prevent such gathering in the learned feature space, which is implemented by self-supervised learning. As shown in Figure 1(d), the poisoned samples of the label-consistent attack are also separated into different areas of the feature space learned by self-supervised learning. This example gives an intuitive explanation of why our DBD can successfully defend against the label-consistent attack.
Furthermore, it is interesting to explore why the poisoned samples in the label-consistent attack are separated under self-supervised learning, since all poisoned samples come from the same target class rather than from different source classes as in poison-label attacks. For each poisoned sample in this attack, there are two types of features: the trigger and the benign feature with (untargeted) adversarial perturbations. From the perspective of DNNs, benign samples with (untargeted) adversarial perturbations resemble samples from different source classes, even though these samples look similar from a human's perspective. Thus, it is not surprising that poisoned samples in clean-label attacks can also be separated under self-supervised learning, just like those in poison-label attacks. | 1. What is the focus and contribution of the paper regarding backdoor defense?
2. What are the strengths and weaknesses of the proposed decoupling-based backdoor defense (DBD)?
3. Do you have any concerns or suggestions regarding the extra computation cost of the DBD pipeline?
4. How does the author address the issue of the backdoor mitigation's impact on the original model's accuracy?
5. How does the fine-tuning step contribute to the effectiveness of the backdoor mitigation, and how sensitive is lambda in this process?
6. Can the authors provide more explanation for the visualization of poisoned and benign data in the embedding space, particularly regarding the distance between poisoned samples and target labels? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a decoupling-based backdoor defense (DBD) on poisoning-based backdoor attacks where an adversary can modify the dataset only. Specifically, DBD combines a self-supervised feature extractor and a supervised noise-free classifier, with an additional semi-supervised learning fine-tuning step. The core idea is to decouple the feature extractor and the final prediction. The authors evaluate the effectiveness of DBD on three datasets, three backdoor attack models, and four defense baselines.
Review
This paper proposes decoupling-based backdoor defense (DBD) on poisoning-based backdoor attacks. Specifically, the authors first give a broad view of why backdoors work by visualizing the poisoned and benign data in the embedding space. They found that the supervised learning paradigm tends to place poisoned data close to its target label rather than its ground-truth label. To mitigate this, DBD first uses self-supervised learning to learn a purified feature extractor. It splits the training dataset into two disjoint parts by comparing the fully connected layers' training loss. Later, it uses the high-credible and low-credible samples to fine-tune the whole model in a semi-supervised manner. Experiments are performed on three datasets, three backdoor attack models, and four defense baselines. The authors also consider multiple potential scenarios in practice, such as different trigger patterns and adaptive backdoor attacks.
The idea of this paper is simple but effective, and the authors' efforts in the experiments are highly appreciated. Detailed comments are as follows.
The primary concern of this paper is the extra computation cost of the proposed DBD pipeline, considering that training a self-supervised and a semi-supervised model costs far more computational resources than training a supervised model. Would it be possible to use a public pre-trained feature extractor to replace the self-supervised feature extractor? The authors are welcome to discuss this.
In Section 5.2, the authors list the results on "No defense" to show the backdoor defenses' impact on the original model's accuracy and the effectiveness of the backdoor mitigation. It is not clear how the model works without a defense. Suppose the authors train the original model in a supervised learning manner. In that case, directly using a self-supervised learning paradigm should also be included to illustrate each step's contribution.
In the fine-tuning step, the sensitivity of lambda should also be discussed, since it controls the ratio of unlabelled data.
In Figure 1(a), the poisoned samples' embedding seems far from the target label (label 3) and closer to other un-targeted labels (label 1, 7, and 9). It would be great if the authors could give explanations. |
ICLR | Title
Tensorized Embedding Layers for Efficient Model Compression
Abstract
The embedding layers transforming input words into real vectors are the key components of deep neural networks used in natural language processing. However, when the vocabulary is large, the corresponding weight matrices can be enormous, which precludes their deployment in a limited resource setting. We introduce a novel way of parametrizing embedding layers based on the Tensor Train (TT) decomposition, which allows compressing the model significantly at the cost of a negligible drop or even a slight gain in performance. We evaluate our method on a wide range of benchmarks in natural language processing and analyze the trade-off between performance and compression ratios for a wide range of architectures, from MLPs to LSTMs and Transformers.
1 INTRODUCTION
Deep neural networks (DNNs) typically used in natural language processing (NLP) employ large embedding layers, which map input words into continuous representations and usually have the form of lookup tables. Despite such simplicity, and arguably because of it, the resulting models are cumbersome, which may cause problems in training and deploying them in a limited resource setting. Thus, the compression of large neural networks and the development of novel lightweight architectures have become essential problems in NLP research.
One way to reduce the number of parameters in the trained model is to impose a specific structure on its weight matrices (e.g., assume that they are low-rank or can be well approximated by low-rank tensor networks). Such approaches are successful at compressing pre-trained models, but they do not facilitate the training itself. Furthermore, they usually require an additional fine-tuning stage to recover the performance of the original model.
In this paper, we introduce a new, parameter-efficient embedding layer, termed TT–embedding, which can be plugged into any model and trained end-to-end. The benefits of our compressed TT–layer are twofold. Firstly, instead of storing a huge embedding matrix, we store a sequence of much smaller 2-dimensional and 3-dimensional tensors, necessary for reconstructing the required embeddings, which allows compressing the model significantly at the cost of a negligible performance drop. Secondly, the overall number of parameters can be relatively small (and constant) during the whole training stage, which allows using larger batches or training efficiently in the case of limited resources.
To validate the efficiency of the proposed approach, we have tested it on several popular NLP tasks. In our experiments, we have observed that the standard embeddings can be replaced by TT–embeddings with a compression ratio of 1–3 orders of magnitude without any significant drop (and sometimes even with a slight gain) in the metric of interest. Specifically, we report the following compression ratios of the embedding layers: 441 on the IMDB dataset with a 0.2% absolute increase in classification accuracy; 15 on the WMT 2014 En–De dataset with a 0.3 drop in the BLEU score.
Additionally, we have also evaluated our algorithm on the task of binary classification based on a large number of categorical features. More concretely, we applied TT–embedding to the click-through rate (CTR) prediction problem, a crucial task in the field of digital advertising. Neural networks typically used for solving this problem, while being rather elementary, include a large number of embedding layers of significant size. As a result, the majority of model parameters, which represent these layers, may occupy hundreds of gigabytes of space. We show that TT–embedding not only considerably reduces the number of parameters in such models but also sometimes improves their accuracy.
2 RELATED WORK
In recent years, a large body of research was devoted to compressing and speeding up various components of neural networks used in NLP tasks. Joulin et al. (2016) adapted the framework of product quantization to reduce the number of parameters in linear models used for text classification. See et al. (2016) proposed to compress LSTM-based neural machine translation models with pruning algorithms. Lobacheva et al. (2017) showed that recurrent models could be significantly sparsified with the help of variational dropout (Kingma et al., 2015). Chen et al. (2018b) proposed a more compact K-way D-dimensional discrete encoding scheme to replace the “one-hot” encoding of categorical features, such as words in NLP tasks. Very recently, Chen et al. (2018a) and Variani et al. (2018) introduced GroupReduce and WEST, two very efficient compression methods for the embedding and softmax layers based on structured low-rank matrix approximation. Concurrently, Lam (2018) proposed a quantization algorithm for compressing word vectors and showed the superiority of the obtained embeddings on word similarity, word analogy, and question answering tasks.
Tensor methods have also been successfully applied to neural network compression. Novikov et al. (2015) coined the idea of reshaping the weights of fully-connected layers into high-dimensional tensors and representing them in the Tensor Train (TT) (Oseledets, 2011) format. This approach was later extended to convolutional (Garipov et al., 2016) and recurrent (Yang et al., 2017; Tjandra et al., 2017; Yu et al., 2017) neural networks. Furthermore, Lebedev et al. (2015) showed that convolutional layers could also be compressed with canonical (CP) tensor decomposition (Carroll & Chang, 1970; Harshman, 1970). Finally, Wang et al. (2018) compressed both fully-connected and convolutional layers with the Tensor Ring decomposition (Zhao et al., 2016). While all these methods reduced the number of parameters in the networks dramatically, they mostly capitalized on heavy fully-connected and convolutional layers (present in AlexNet (Krizhevsky et al., 2012) or VGG (Simonyan & Zisserman, 2014)), which became outdated in the following years. Recently, Ma et al. (2019) successfully applied Block-Term Tensor Decomposition to the compression of self-attention modules in the Transformer (Vaswani et al., 2017) architecture. In this work, we show the benefits of applying tensor machinery to the compression of embedding layers, which are still widely used in NLP.
3 TENSOR TRAIN EMBEDDING
In this section, we briefly introduce the necessary notation and present the algorithm for training the TT–embedding layer. Hereinafter, by an $N$-way tensor $\mathcal{X}$ we mean a multidimensional array:
$$\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N},$$
with entries $\mathcal{X}(i_1, \ldots, i_N)$ such that $0 \leq i_k < I_k$ for $k = 1, \ldots, N$.
3.1 MOTIVATION
Since most of the parameters in NLP models occupy the embedding layers, we can greatly reduce the size of the entire model by compressing these layers. Our goal is to replace the standard embedding matrix with a more compact, yet powerful and trainable, representation which would allow us to efficiently map words into vectors.
The simplest approach to compactly represent a large matrix is low-rank matrix factorization, which treats a matrix $E \in \mathbb{R}^{I \times J}$ as a product of two matrices $E = UV^\top$. Here $U \in \mathbb{R}^{I \times R}$ and $V \in \mathbb{R}^{J \times R}$ are much “thinner” matrices, and $R$ is the rank hyperparameter. Note that rather than training the model with the standard embedding layer and then trying to compress the obtained embedding, we can initially seek the embedding matrix in the described low-rank format. Then, for evaluation and training, the individual word embedding $E[i, :]$ can be computed as the product $U[i, :]V^\top$, which does not require materializing the full matrix $E$. This approach reduces the number of degrees of freedom in the embedding layer from $IJ$ to $(I + J)R$.
However, in NLP tasks the embedding dimension $J$ is typically much smaller than the vocabulary size $I$, and obtaining a significant compression ratio using low-rank matrix factorization is problematic. In order to preserve the model performance, the rank $R$ cannot be taken very small, and the compression ratio is bounded by $\frac{IJ}{(I+J)R} \leq \frac{J}{R}$, which is close to 1 for the usually full-rank embedding matrix (see Figure 1 in Chen et al. (2018b)). To overcome this bound and achieve a significant compression ratio even for matrices of disproportionate dimensionalities, we reshape them into multidimensional tensors and apply the Tensor Train decomposition, which allows for a more compact representation, where the number of parameters scales logarithmically with $I$.
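For reference, the low-rank baseline described above can be implemented in a few lines of PyTorch; the class below is our sketch, not code from the released repository.

```python
import torch
import torch.nn as nn

class LowRankEmbedding(nn.Module):
    """The E = U V^T baseline of Section 3.1 (a sketch). Rows are formed
    on the fly, so the full I x J matrix is never materialized."""

    def __init__(self, vocab_size, emb_dim, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(vocab_size, rank) * 0.02)
        self.V = nn.Parameter(torch.randn(emb_dim, rank) * 0.02)

    def forward(self, idx):               # idx: LongTensor of word indices
        return self.U[idx] @ self.V.t()   # shape: (*idx.shape, emb_dim)
```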
3.2 TENSOR TRAIN DECOMPOSITION
A tensor X is said to be represented in the Tensor Train (TT) format (Oseledets, 2011) if each element of X can be computed as:
$$\mathcal{X}(i_1, i_2, \ldots, i_N) = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \cdots \sum_{r_{N-1}=1}^{R_{N-1}} \mathcal{G}^{(1)}(i_1, r_1)\, \mathcal{G}^{(2)}(r_1, i_2, r_2) \cdots \mathcal{G}^{(N)}(r_{N-1}, i_N),$$
where the tensors $\mathcal{G}^{(k)} \in \mathbb{R}^{R_{k-1} \times I_k \times R_k}$ are the so-called TT–cores and $R_0 = R_N = 1$ by definition. The minimal values of $\{R_k\}_{k=1}^{N-1}$ for which the TT–decomposition exists are called TT–ranks. Note that the element $\mathcal{X}(i_1, i_2, \ldots, i_N)$ is effectively just the product of two vectors and $N - 2$ matrices:
$$\mathcal{X}(i_1, \ldots, i_N) = \underbrace{\mathcal{G}^{(1)}[i_1, :]}_{1 \times R_1}\; \underbrace{\mathcal{G}^{(2)}[:, i_2, :]}_{R_1 \times R_2} \cdots \underbrace{\mathcal{G}^{(N-1)}[:, i_{N-1}, :]}_{R_{N-2} \times R_{N-1}}\; \underbrace{\mathcal{G}^{(N)}[:, i_N]}_{R_{N-1} \times 1},$$
where $\mathcal{G}^{(k)}[:, i_k, :]$ stands for the slice (a subset of a tensor with some indices fixed) of the corresponding TT–core $\mathcal{G}^{(k)}$.
The number of degrees of freedom in such a decomposition is $\sum_{k=1}^{N} R_{k-1} I_k R_k$. Thus, in the case of small ranks, the total number of parameters required to store a tensor in the TT–representation is significantly smaller than the $\prod_{k=1}^{N} I_k$ parameters required to store the full tensor of the corresponding size. This observation makes the application of the TT–decomposition appealing in many problems dealing with extremely large tensors.
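The slice-product formula above translates directly into code; the following PyTorch sketch evaluates a single tensor element from a list of TT-cores.

```python
import torch

def tt_element(cores, index):
    """Evaluate X(i1, ..., iN) as the product of TT-core slices.

    cores: list of tensors of shapes (R_{k-1}, I_k, R_k) with R_0 = R_N = 1;
    index: tuple (i1, ..., iN). A direct sketch of the slice product.
    """
    res = cores[0][:, index[0], :]        # shape (1, R_1)
    for core, i_k in zip(cores[1:], index[1:]):
        res = res @ core[:, i_k, :]       # (1, R_{k-1}) @ (R_{k-1}, R_k)
    return res.squeeze()                  # final shape (1, 1) -> scalar
```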
3.3 TT–MATRIX
Let $X \in \mathbb{R}^{I \times J}$ be a matrix of size $I \times J$. Given two arbitrary factorizations of its dimensions into natural numbers, $I = \prod_{k=1}^{N} I_k$ and $J = \prod_{k=1}^{N} J_k$, we can reshape1 and transpose this matrix into an $N$-way tensor $\mathcal{X} \in \mathbb{R}^{I_1 J_1 \times I_2 J_2 \times \cdots \times I_N J_N}$ and then apply the TT–decomposition to it, resulting in a more compact representation.
More concretely, define the bijections $\mathcal{I}(i) = (i_1, \ldots, i_N)$ and $\mathcal{J}(j) = (j_1, \ldots, j_N)$ that map row and column indices $i$ and $j$ of the matrix $X$ to $N$-dimensional vector-indices such that $0 \leq i_k < I_k$ and $0 \leq j_k < J_k$ for all $k = 1, \ldots, N$. From the matrix $X$ we can form an $N$-way tensor $\mathcal{X}$ whose $k$-th dimension is of length $I_k J_k$ and is indexed by the tuple $(i_k, j_k)$. This tensor is then represented in the TT–format:
$$\mathcal{X}((i_1, j_1), \ldots, (i_N, j_N)) = \mathcal{G}^{(1)}[(i_1, j_1), :] \cdots \mathcal{G}^{(N)}[:, (i_N, j_N)]. \qquad (1)$$
Such a representation of a matrix in the TT–format is called a TT–matrix (Oseledets, 2010; Novikov et al., 2015) and is also known as a Matrix Product Operator (Pirvu et al., 2010) in the physics literature. The factorization $(I_1, I_2, \ldots, I_N) \times (J_1, J_2, \ldots, J_N)$ will be referred to as the shape of the TT–matrix, or TT–shapes. The process of constructing a TT–matrix from a standard matrix is visualized in Figure 1 for a tensor of order 3. Note that in this case the TT–cores are in fact 4-th order tensors, but all operations defined for tensors in the TT–format naturally extend to TT–matrices.
3.4 TT–EMBEDDING
By TT–embedding, we call a layer with trainable parameters (TT–cores) represented as a TT–matrix $\mathcal{E}$ of underlying tensor shape $(I_1, I_2, \ldots, I_N) \times (J_1, J_2, \ldots, J_N)$, which can be transformed into a valid embedding layer $E \in \mathbb{R}^{I \times J}$, with $I = \prod_{k=1}^{N} I_k$ and $J = \prod_{k=1}^{N} J_k$. To specify the shapes of the TT–cores, one also has to provide the TT–ranks, which are treated as hyperparameters of the layer and explicitly define the total compression ratio.
1 By reshape we mean a column-major reshape command such as numpy.reshape in Python.
In order to compute the embedding for a particular word with index $i$ in the vocabulary, we first map the row index $i$ to the $N$-dimensional vector-index $(i_1, \ldots, i_N)$, and then calculate the components of the embedding with formula (1). Note that the computation of all its components is equivalent to selecting particular slices in the TT-cores (slices of shapes $J_1 \times R_1$ in $\mathcal{G}^{(1)}$, $R_1 \times J_2 \times R_2$ in $\mathcal{G}^{(2)}$, and so on) and performing a sequence of matrix multiplications, which is executed efficiently in modern linear algebra packages such as BLAS. Pseudocode for the procedure of computing the mapping $i \rightarrow (i_1, \ldots, i_N)$ is given in Appendix A. In order to construct a TT–embedding layer for a vocabulary of size $I$ and embedding dimension $J$, and to train a model with such a layer, one has to perform the following steps (a lookup sketch follows the list below).
• Provide factorizations of $I$ and $J$ into factors $I = I_1 \times I_2 \times \cdots \times I_N$ and $J = J_1 \times J_2 \times \cdots \times J_N$, and specify the set of TT–ranks $\{R_1, R_2, \ldots, R_{N-1}\}$.
• Initialize the set of parameters of the embedding $\Theta = \{\mathcal{G}^{(k)} \in \mathbb{R}^{R_{k-1} \times I_k \times J_k \times R_k}\}_{k=1}^{N}$. Concrete initialization scenarios are discussed further in the text.
• During training, given a batch of indices {i1, i2, . . . ib}, compute the corresponding embeddings {e1, e2, . . . , eb} using Eq. (1) and Algorithm 1.
• Computed embeddings can be followed by any standard layer such as LSTM (Hochreiter & Schmidhuber, 1997) or self-attention (Vaswani et al., 2017), and trained with backpropagation since they differentially depend on the parameters Θ.
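Putting the index mapping and the slice multiplications together, a single embedding row can be computed as in the sketch below; the mixed-radix mapping stands in for Algorithm 1, whose exact ordering conventions may differ in the reference implementation.

```python
import torch

def tt_embedding_row(cores, i, row_shape):
    """Compute the embedding of word i from TT-matrix cores (a sketch).

    cores[k] has shape (R_{k-1}, I_k, J_k, R_k); row_shape = (I_1, ..., I_N).
    Returns a vector of length J_1 * ... * J_N.
    """
    idx = []
    for I_k in reversed(row_shape):       # mixed-radix mapping i -> (i_1, ..., i_N)
        idx.append(i % I_k)
        i //= I_k
    idx.reverse()
    res = cores[0][0, idx[0]]             # slice of shape (J_1, R_1)
    for core, i_k in zip(cores[1:], idx[1:]):
        s = core[:, i_k]                  # slice of shape (R_{k-1}, J_k, R_k)
        res = torch.einsum('ar,rjb->ajb', res, s).reshape(-1, s.shape[-1])
    return res.reshape(-1)                # embedding of length J_1 * ... * J_N
```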
TT–embedding implies a specific structure on the order of tokens in the vocabulary (the order of rows in the embedding matrix), and determining the optimal order is an appealing problem to solve. However, we leave this problem for future work and use the order produced by the standard tokenizer (sorted by frequency) in our current experiments.
We also experimented with a more general form of TT-decomposition, namely Tensor Ring decomposition (Zhao et al., 2016; Wang et al., 2018). This decomposition by construction has the appealing property of being circular permutation invariant (and, thus, more robust with respect to the order of the tokens), which could have potentially provided an improvement over the TT-based models with simple frequency based ordering. Our experiments with TR decomposition on Transformer for NMT can be found in Appendix B.
Initialization. The standard way to initialize an embedding matrix $E \in \mathbb{R}^{I \times J}$ is via, e.g., the Glorot initializer (Glorot & Bengio, 2010), which initializes each element as $E(i, j) \sim \mathcal{N}\left(0, \frac{2}{I+J}\right)$. For the TT–embedding, we can only initialize the TT–cores, and the distribution of the elements of the resulting matrix $E$ is rather non-trivial. However, it is easy to verify that if we initialize each TT–core element as $\mathcal{G}^{(k)}(r_{k-1}, i_k, r_k) \sim \mathcal{N}(0, 1)$, the resulting matrix elements $E(i, j)$ satisfy $\mathbb{E}[E(i, j)] = 0$ and $\mathrm{Var}[E(i, j)] = \prod_{k=1}^{N-1} R_k = R^2$. Capitalizing on this observation, in order to obtain the desired variance $\mathrm{Var}[E(i, j)] = \sigma^2$ while keeping $\mathbb{E}[E(i, j)] = 0$, we can simply initialize each TT–core as
$$\mathcal{G}^{(k)}(r_{k-1}, i_k, r_k) \sim \mathcal{N}\left(0, \left(\frac{\sigma}{R}\right)^{2/N}\right). \qquad (2)$$
The resulting distribution is not Gaussian; however, it approaches the Gaussian distribution as the TT–rank increases (Figure 2).
In our experiments, we have used the modified Glorot initializer implemented by formula (2), which greatly improved performance compared to initializing the TT–cores simply via a standard normal distribution. It is also possible to initialize the TT–embedding layer by converting a learned embedding matrix into the TT–format using the TT–SVD algorithm (Oseledets, 2011); however, this approach requires a pretrained embedding matrix and does not exhibit better performance in practice.
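The modified Glorot initializer of Eq. (2) amounts to drawing every core element with a per-element standard deviation of $(\sigma/R)^{1/N}$, as in the following sketch, where $R$ is defined through $R^2 = \prod_k R_k$.

```python
import math
import torch

def init_tt_cores(cores, sigma):
    """Initialize TT-cores following Eq. (2) (a sketch, Python 3.8+).

    cores[k] has shape (R_{k-1}, I_k, J_k, R_k); R is defined through
    R^2 = prod of the TT-ranks so the implicit matrix variance is sigma^2.
    """
    ranks = [c.shape[-1] for c in cores[:-1]]   # R_1, ..., R_{N-1}
    r = math.sqrt(math.prod(ranks))
    std = (sigma / r) ** (1.0 / len(cores))     # std^2 = (sigma/R)^(2/N)
    with torch.no_grad():
        for core in cores:
            core.normal_(0.0, std)
```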
Hyperparameter selection Our embedding layer introduces two additional structure-specific hyperparameters, namely TT–shapes and TT–ranks.
TT–embedding does not require the vocabulary size I to be represented exactly as the product of factors I_1, . . . , I_N; in fact, any factorization ∏_{k=1}^{N} I_k = Ĩ ≥ I will suffice. However, in order to achieve the highest possible compression ratio for a fixed value of Ĩ, the factors {I_k}_{k=1}^{N} should be as close to each other as possible. Our implementation includes a simple automated procedure for selecting good values of {I_k}_{k=1}^{N} during TT–embedding initialization. The factors J_1, . . . , J_N are defined by the embedding dimensionality J, which can easily be chosen to support a good factorization, e.g., 512 = 8 × 8 × 8 or 480 = 6 × 5 × 4 × 4. The values of the TT–ranks directly define the compression ratio, so choosing them too small or too large will result in either a significant performance drop or little reduction in the number of parameters. In our experiments, we set all TT–ranks to 16 for problems with small vocabularies and to 64–192 for problems with larger vocabularies, which allowed us to achieve significant compression of the embedding layer at the cost of a tiny sacrifice in the metrics of interest.
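The shape-selection procedure is not spelled out in the paper; the sketch below is one hypothetical way to implement it, searching for the smallest Ĩ ≥ I that splits into N near-equal factors.

def balanced_factors(m, n):
    # Split m into n factors, each close to m ** (1 / n); return None if impossible.
    if n == 1:
        return [m]
    target = max(2, round(m ** (1.0 / n)))
    for delta in range(target):
        for f in (target - delta, target + delta):
            if 1 < f <= m and m % f == 0:
                rest = balanced_factors(m // f, n - 1)
                if rest is not None:
                    return [f] + rest
    return None

def suggest_tt_shape(vocab_size, n):
    # Smallest I_tilde >= vocab_size admitting n near-equal factors.
    i_tilde = vocab_size
    while True:
        factors = balanced_factors(i_tilde, n)
        if factors is not None:
            return sorted(factors)
        i_tilde += 1

print(suggest_tt_shape(25000, 3))   # -> [25, 25, 40], i.e., 25000 exactly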
4 EXPERIMENTS
Code We have implemented TT–embeddings described in Section 3 in Python using PyTorch (Paszke et al., 2017). The code is available at the anonymous repository https://github.com/ttembedding/tt-embeddings.
Experimental setup We tested our approach on several popular NLP tasks:
• Sentiment analysis — as a starting point in our experiments, we test TT–embeddings on a rather simple task of predicting polarity of a sentence.
• Neural Machine Translation (NMT) — to verify the applicability of TT–embeddings in more practical problems, we test it on a more challenging task of machine translation.
• Language Modeling (LM) — then, we evaluate TT–embeddings on language modeling tasks in the case of extremely large vocabularies.
• Click Through Rate (CTR) prediction — finally, we show that TT–embeddings can be applied for the binary classification with categorical features of significant cardinality.
To prove the generality and wide applicability of the proposed approach, we tested it on various architectures, such as MLPs (CTR), LSTMs (sentiment analysis), and Transformers (NMT, LM).
Note that the Transformers in LM and NMT use the same weight matrix for their embedding and softmax layers (Press & Wolf, 2016; Inan et al., 2016), which already significantly reduces model size. Untying the weights and tensorizing only the embedding layer would increase the number of parameters instead of compressing the model. In our experiments, we therefore use two separate TT-decompositions of the same shape for the embedding and softmax layers and report the compression ratios as (|V| × d_model) / (2 × TT-params).
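For reference, the reported ratio can be computed as follows; the shapes and ranks here are illustrative placeholders rather than the exact configurations behind Table 2.

def tt_matrix_params(shape, ranks):
    # shape: [(I_1, J_1), ..., (I_N, J_N)]; ranks: [1, R_1, ..., R_{N-1}, 1].
    return sum(ranks[k] * i_k * j_k * ranks[k + 1]
               for k, (i_k, j_k) in enumerate(shape))

vocab, d_model = 32768, 1024                  # joint BPE vocabulary, Transformer-big
shape = [(32, 8), (32, 16), (32, 8)]          # 32^3 = 32768, 8 * 16 * 8 = 1024
ranks = [1, 64, 64, 1]
ratio = (vocab * d_model) / (2 * tt_matrix_params(shape, ranks))
print(f"compression ratio: {ratio:.1f}x")     # ~7.9x for these placeholder ranks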
4.1 SENTIMENT ANALYSIS
For this experiment, we have used the IMDB dataset (Maas et al., 2011) with two categories, and the Stanford Sentiment Treebank (SST) (Socher et al., 2013) with five categories. We have taken the most frequent 25000 words for the IMDB dataset and 17200 for SST, embedded them into a J–dimensional space using either standard embedding or TT–embedding layer, and performed classification using a standard bidirectional two–layer LSTM with hidden size h = 128, and dropout rate Pdrop = 0.5.
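A PyTorch sketch of this setup follows; the embedding dimension is an assumption (the paper leaves J unspecified here), and nn.Embedding is a stand-in to be swapped for the TT-embedding layer in the compressed variant.

import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    def __init__(self, vocab_size=25000, emb_dim=256, hidden=128, num_classes=2):
        super().__init__()
        # swap nn.Embedding for the TT-embedding layer to get the compressed model
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2, batch_first=True,
                            bidirectional=True, dropout=0.5)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, tokens):                   # tokens: (batch, seq_len)
        out, _ = self.lstm(self.emb(tokens))     # (batch, seq_len, 2 * hidden)
        return self.head(out[:, -1])             # logits from the last time step

logits = SentimentLSTM()(torch.randint(0, 25000, (4, 50)))   # shape (4, 2)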
Our findings are summarized in Table 1. We observe that models with heavily compressed embedding layers can perform on par with, or even better than, the full uncompressed models. This suggests that learning individual independent embeddings for each particular word is superfluous, as the expressive power of the LSTM is sufficient to make use of these intertwined, yet more compact, embeddings. Moreover, the slightly better test accuracy of the compressed models in certain cases (e.g., on the rather small SST dataset) suggests that imposing a specific tensorial low-rank structure on the embedding matrix can be viewed as a special form of regularization, potentially improving model generalization. A detailed and comprehensive test of this hypothesis goes beyond the scope of this paper, and we leave it for future work.
4.2 NEURAL MACHINE TRANSLATION
For this experiment, we have trained the Transformer-big model (d_model = 1024, d_ff = 4096, h = 16) from (Vaswani et al., 2017) on the WMT 2014 English–German dataset, consisting of roughly 4.5 million sentence pairs. We evaluated on the newstest2014 dataset using beam search with a beam size of 4 and no length penalty. We did not employ checkpoint averaging and used the last checkpoint to compute the BLEU score. Sentences were tokenized with YouTokenToMe2 byte-pair encodings, resulting in a joint vocabulary of 32768 tokens. For the full list of hyperparameters, see Appendix C.
Our results are summarized in Table 2. We observe that even in this rather challenging task, both the embedding and softmax layers can be compressed significantly, at the cost of a small drop in the BLEU score.
2https://github.com/VKCOM/YouTokenToMe
However, as the compression factor increases, the performance deteriorates rapidly. Compared to sentiment analysis, NMT is a much more complex task that benefits more from additional capacity (in the form of a more powerful RNN or more transformer blocks) than from regularization (Bahdanau et al., 2014; Vaswani et al., 2017; Wu et al., 2019), which may explain why we did not manage to improve the model by regularizing its embedding layers.
TT-embeddings induce an 8% training iteration time overhead compared to the baseline Transformer-big, since our current implementation relies heavily on the slow torch.einsum function, while the standard embedding and softmax layers make use of fast, highly optimized Tensor Cores for mixed-precision training. We expect a dedicated CUDA kernel to be much more efficient.
4.3 LANGUAGE MODELING
We took Transformer-XL (Dai et al., 2019), an open-source3 state-of-the-art language modeling architecture at the time of this writing, and replaced its embedding and softmax layers with TT–factorizations. We then tested different model configurations on the WikiText–103 (Merity et al., 2016) dataset and report the results in Table 3. For the full list of hyperparameters, see Appendix C.
Compared to sentiment analysis and NMT, we were not able to achieve such high compression ratios for the embedding and softmax layers in LM. However, even a moderate 3.8× compression allowed us to save 100M weights at the cost of a perplexity increase of ∼1.5.
4.4 CLICK THROUGH RATE PREDICTION
Among other applications of the TT–embedding layer, we chose to focus on CTR prediction, a popular task in digital advertising (He et al., 2014). We consider the open dataset provided by Criteo for the Kaggle Display Advertising Challenge (Criteo Labs, 2014), which consists of 39 categorical features and 45.8M samples, binary-labeled according to whether the user clicked on the given advertisement. Unique values of categorical features are bijectively mapped into integers. To reduce the memory footprint when the vocabulary of a feature is immense (e.g., some features in this dataset have cardinality of order 10^6), these integers are conventionally hashed further by taking the modulus with respect to some fixed number, such as 10^5. However, due to the strong compression properties of TT–embeddings, this is not necessary for our approach, and we consider both the full and the hashed datasets in our experiments.
3https://github.com/kimiyoung/transformer-xl
CTR with the baseline algorithm The task at hand can be treated as a binary classification problem. As a baseline, we consider a neural network with the following architecture. First, each categorical feature is passed through a separate embedding layer with embedding size J. The embedded features are then concatenated and passed through 4 fully-connected layers of 1024 neurons with ReLU activations. In all experiments, we used the Adam optimizer with a learning rate of 0.0005. Since many input features have a large number of unique values (e.g., 10131227) and storing the corresponding embedding matrices would be costly, we employ the hashing procedure mentioned earlier.
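A sketch of this baseline is given below; the embedding size and the hashed cardinalities are placeholders, and each nn.Embedding is exactly what the next paragraph replaces with a TT-embedding layer.

import torch
import torch.nn as nn

class CTRBaseline(nn.Module):
    def __init__(self, cardinalities, emb_dim=16):
        super().__init__()
        # one embedding table per categorical feature (replaced by TT-embeddings later)
        self.embs = nn.ModuleList(nn.Embedding(c, emb_dim) for c in cardinalities)
        dims = [len(cardinalities) * emb_dim, 1024, 1024, 1024, 1024]
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.ReLU()]
        self.mlp = nn.Sequential(*layers, nn.Linear(1024, 1))

    def forward(self, x):                        # x: (batch, 39) integer feature ids
        h = torch.cat([emb(x[:, i]) for i, emb in enumerate(self.embs)], dim=-1)
        return self.mlp(h).squeeze(-1)           # logits for BCEWithLogitsLoss

model = CTRBaseline(cardinalities=[10 ** 5] * 39)            # hashed vocabularies
opt = torch.optim.Adam(model.parameters(), lr=0.0005)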
CTR with TT–embeddings We substitute the embedding layers with TT–embedding layers; the rest of the network is left unchanged, with the same parameters as in the baseline. Table 4 presents the experimental results on the Criteo CTR dataset. To the best of our knowledge, our loss value is very close to the state-of-the-art result (Juan et al., 2016). These experiments indicate that substituting large embedding layers with TT–embeddings leads to significant compression ratios (up to 2011 times) with a slight improvement in the test loss, and up to 4200 times with a small drop in the test loss. The total size of the compressed model does not exceed 20 Mb, while the baseline model weighs about 160 Mb. The obtained compression ratios suggest that TT–embedding layers may be beneficial in CTR prediction tasks.
5 DISCUSSION AND FUTURE WORK
We propose a novel embedding layer, the TT–embedding, for compressing huge lookup tables used for encoding categorical features of significant cardinality, such as the index of a token in natural language processing tasks. The proposed approach, based on the TT–decomposition, experimentally proved to be effective, as it heavily decreases the number of training parameters at the cost of a small deterioration in performance. In addition, our method can be easily integrated into any deep learning framework and trained via backpropagation, while capitalizing on reduced memory requirements and increased training batch size.
Our experimental results suggest several appealing directions for future work. First of all, TT–embeddings impose a concrete tensorial low-rank structure on the embedding matrix, which was shown to act as a regularizer and improve the generalization ability of the networks. The properties and conditions of applicability of this regularizer are a subject for more rigorous analysis. Secondly, unlike the standard embedding, we can introduce non-linearity into the TT-cores to improve their expressive power (Khrulkov et al., 2019). Additionally, it is important to understand how the order of tokens in the vocabulary affects the properties of networks with TT–embeddings. We hypothesize that there exists an optimal order of tokens which better exploits the particular structure of the TT–embedding and leads to a boost in performance and/or compression ratio. Finally, the idea of applying higher–order tensor decompositions to reduce the number of parameters in neural nets is complementary to more traditional methods such as pruning (Han et al., 2015) and quantization (Hubara et al., 2017; Xu et al., 2018). Thus, it would be interesting to make a thorough comparison of all these methods and investigate whether their combination may lead to even stronger compression.
A MULTIINDEX CONSTRUCTION
Algorithm 1 The algorithm implementing the bijection I(i) as described in Section 3.3.
Require: I – vocabulary size, {I_k}_{k=1}^{N} – an arbitrary factorization of I, i – index of the target word in the vocabulary.
Returns: I(i) = (i_1, . . . , i_N) – the N-dimensional index.
Initialize: L = {1, I_1, I_1 I_2, . . . , I_1 I_2 · · · I_{N−1}}
for k = N to 1 do
    i_k ← floor(i / L[k])
    i ← i mod L[k]
end for
Algorithm 2 The algorithm implementing the bijection (i_1, . . . , i_N) → i, inverse to I(i).
Require: I – vocabulary size, {I_k}_{k=1}^{N} – an arbitrary factorization of I, (i_1, . . . , i_N) – the N-dimensional index.
Returns: i – index of the target word in the vocabulary.
Initialize: L = {1, I_1, I_1 I_2, . . . , I_1 I_2 · · · I_{N−1}}
i ← 0
for k = 1 to N do
    i ← i + i_k × L[k]
end for
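A direct Python transcription of both algorithms (assuming 0-based indices):

def index_to_multi(i, factors):
    # Algorithm 1: flat row index i -> (i_1, ..., i_N), column-major.
    multi = []
    for size in factors:
        multi.append(i % size)
        i //= size
    return tuple(multi)

def multi_to_index(multi, factors):
    # Algorithm 2: the inverse bijection (i_1, ..., i_N) -> i.
    i, stride = 0, 1
    for i_k, size in zip(multi, factors):
        i += i_k * stride
        stride *= size
    return i

assert multi_to_index(index_to_multi(123, [5, 5, 5]), [5, 5, 5]) == 123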
B TENSOR RING EMBEDDING
Tensor Ring (TR) decomposition is a generalization of the TT-decomposition in which the first and the last cores are 3-dimensional tensors, which corresponds to R_0 = R_N > 1. Formally, a tensor X is said to be represented in the TR format (Zhao et al., 2016) if each element of X can be computed as:
X(i_1, i_2, . . . , i_N) = ∑_{r_0=1}^{R_0} ∑_{r_1=1}^{R_1} · · · ∑_{r_{N−1}=1}^{R_{N−1}} G^(1)(r_0, i_1, r_1) G^(2)(r_1, i_2, r_2) . . . G^(N)(r_{N−1}, i_N, r_0).
Similar to TT, we can define TR-matrix (see Figure 3) and corresponding TR-embedding layer.
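A minimal sketch of evaluating one entry under this definition, assuming 3-way torch cores of shape (R_{k−1}, I_k, R_k) with R_0 = R_N:

import torch

def tr_element(cores, multi_index):
    prod = cores[0][:, multi_index[0], :]        # (R_0, R_1)
    for core, i_k in zip(cores[1:], multi_index[1:]):
        prod = prod @ core[:, i_k, :]            # (R_0, R_k)
    return torch.trace(prod)                     # the trace sums over r_0, closing the ring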
Table 5 shows the performance of different NMT models with both embedding and softmax layers replaced by either TT or TR factorizations. To achieve the same compression factor as the corresponding TT models, TR models must have smaller ranks, which negatively affects their performance. Furthermore, TR is more computationally heavy.
C COMPLETE LIST OF HYPERPARAMETERS

1. What is the main contribution of the paper regarding low-rank tensor decomposition in NLP?
2. What are the strengths and weaknesses of the proposed approach compared to prior works like Tensor Ring?
3. How does the reviewer assess the experimental results and comparisons made in the paper?
4. Are there any concerns or suggestions regarding the paper's novelty, intuitive approach, and lack of theoretical analysis?
5. Are there any minor issues or typos that the authors could address to improve the paper's quality?

Review
This paper proposes a low-rank tensor decomposition model (Tensor Train, TT; Oseledets, 2011) to parameterize the embedding matrix in natural language processing (NLP). It shows that TT allows compressing the network and sometimes even slightly increases the test accuracy. The paper is well written and easy to follow.
I see the idea as a natural consequence of many recent papers proposing tensor decompositions to parameterize deep learning networks. However, I think this is the first time the concept has been applied to learning an embedding matrix, which is an important problem in the field.
The authors report several experimental results on different NLP tasks and datasets, such as sentiment analysis, neural machine translation, and language modeling, as well as an application to the click-through-rate prediction problem.
I think the paper is of limited novelty but includes interesting experimental results that help to better understand the potential and limitations of tensor decompositions in deep learning architectures.
Below, I summarize the issues I found, and I would like the authors to address them in their responses:
Major issues:
- On Page 4 (and Appendix B), the authors show a comparison with Tensor Ring (TR) and conclude that TT marginally outperforms TR in terms of the BLEU measure for a fixed number of parameters in both models. I found this comparison incomplete, weak, and misleading for the following reasons:
o TR is a more general model, including TT as a particular case when the first and last ranks are equal to one (Zhao et al., 2016). In fact, in this experiment the authors set all intermediate ranks to the same value R, with the first/last ranks set to 1 (TT) and R (TR), respectively, which is not a fair comparison. The reported results suggest that the first/last ranks carry less information than the intermediate ranks, but it would be necessary to explore other combinations of rank values, without keeping them constant, to assess the generalization power of the TR model, which includes TT as a particular case.
o The authors compare TT and TR only for NMT (Transformer-big on the WMT'14 English-to-German dataset), where the results are not good for either TT or TR. Note that the baseline model (Big) attains SacreBLEU = 28.84, TT1 = 28.53, and TR1 = 28.07, while the compression rate is only 210/179 = 1.17 for TT1 and TR1 and the iteration time is larger than for the baseline model. In this case, there is no clear advantage to using TT or TR.
In my opinion, to improve the paper, the authors could:
o Avoid the sentence "In our experiments, however, the resulting TR embeddings performed slightly worse than TT–embeddings with the same number of parameters" unless more conclusive and exhaustive experiments comparing TT and TR are performed.
o Add comparison results between TT and TR for the remaining tasks, such as sentiment analysis, language modeling, and the click-through-rate prediction problem.
o Highlight that TT is a particular case of TR, so setting the first and last ranks equal to one reduces the number of parameters but can affect the generalization power of the model.
- The approach of the paper is mostly intuitive. A theoretical result explaining why a low-rank TT is able to capture the useful information of an optimal or suboptimal embedding matrix is missing.
Minor issues:
- In the last paragraph of Section 3.2: the number of parameters is computed on the 3D tensor cores only. I think the size of the first/last 2D cores should be added. Please revise the equation.
- The pseudocode for mapping one index to multiple indices is trivial and could be omitted. If it is kept, I think the inverse operation should also be included, i.e., how to map the multiple indices i_1, . . . , i_N back to a single index i.
- The discussion and Figure 2 about the Gaussianity of the values in a higher-order tensor built from Gaussian core tensors are not relevant. Perhaps the authors should better motivate why it is important to highlight that the distribution tends to a Gaussian density for increasing ranks.
- On page 5, it is mentioned that "factors should be as close to each other as possible", but no justification is given. Could you give some theoretical insight into why it is important to obtain a uniform distribution of factor sizes?
- Section 4.1: the reference to the Stanford Sentiment Treebank (SST) is missing.
On Nov 16th: I am satisfied with the responses provided by the authors, who made a few changes to resolve some of the identified minor issues. Thanks for taking the review report into account. I have raised the rating.
1. What is the novel approach introduced by the paper for parametrizing embedding layers?
2. What is the advantage of using the proposed method in terms of model compression and performance?
3. How does the reviewer suggest expanding the experimental comparison to include other methods for compressing embedding layers?

Review
This paper introduces a novel way of parametrizing embedding layers based on the Tensor Train (TT) decomposition, which allows compressing the model significantly at the cost of a negligible drop or even a slight gain in performance. The paper focuses on the input embedding layers.
In the experiments, the paper only compares the TT layer against a normal embedding layer. Many other methods have been proposed to compress embedding layers; it would be good to compare with one or two of them, such as WEST or compression based on projection layers.
ICLR | Title
Tensorized Embedding Layers for Efficient Model Compression
Abstract
The embedding layers transforming input words into real vectors are the key components of deep neural networks used in natural language processing. However, when the vocabulary is large, the corresponding weight matrices can be enormous, which precludes their deployment in a limited resource setting. We introduce a novel way of parametrizing embedding layers based on the Tensor Train (TT) decomposition, which allows compressing the model significantly at the cost of a negligible drop or even a slight gain in performance. We evaluate our method on a wide range of benchmarks in natural language processing and analyze the trade-off between performance and compression ratios for a wide range of architectures, from MLPs to LSTMs and Transformers.
1 INTRODUCTION
Deep neural networks (DNNs) typically used in natural language processing (NLP) employ large embeddings layers, which map the input words into continuous representations and usually have the form of lookup tables. Despite such simplicity and, arguably because of it, the resulting models are cumbersome, which may cause problems in training and deploying them in a limited resource setting. Thus, the compression of large neural networks and the development of novel lightweight architectures have become essential problems in NLP research.
One way to reduce the number of parameters in the trained model is to imply a specific structure on its weight matrices (e.g., assume that they are low-rank or can be well approximated by low-rank tensor networks). Such approaches are successful at compressing the pre-trained models, but they do not facilitate the training itself. Furthermore, they usually require an additional fine-tuning stage to recover the performance of the original model.
In this paper, we introduce a new, parameter efficient embedding layer, termed TT–embedding, which can be plugged in into any model and trained end-to-end. The benefits of our compressed TT–layer are twofold. Firstly, instead of storing huge embedding matrix, we store a sequence of much smaller 2-dimensional and 3-dimensional tensors, necessary for reconstructing the required embeddings, which allows compressing the model significantly at the cost of a negligible performance drop. Secondly, the overall number of parameters can be relatively small (and constant) during the whole training stage, which allows to use larger batches or train efficiently in a case of limited resources.
To validate the efficiency of the proposed approach, we have tested it on several popular NLP tasks. In our experiments, we have observed that the standard embeddings can be replaced by TT–embeddings with the compression ratio of 1− 3 orders without any significant drop (and sometimes even with a slight gain) of the metric of interest. Specifically, we report the following compression ratios of the embedding layers: 441 on the IMDB dataset with 0.2% absolute increase in classification accuracy; 15 on the WMT 2014 En–De dataset with 0.3 drop in the BLEU score.
Additionally, we have also evaluated our algorithm on a task of binary classification based on a large number of categorical features. More concretely, we applied TT–embedding to the click through rate (CTR) prediction problem, a crucial task in the field of digital advertising. Neural networks, typically used for solving this problem, while being rather elementary, include a large number of embedding layers of significant size. As a result, a majority of model parameters that represent these layers, may occupy hundreds of gigabytes of space. We show that TT–embedding not only considerably reduces the number of parameters in such models, but also sometimes improves their accuracy.
2 RELATED WORK
In recent years, a large body of research was devoted to compressing and speeding up various components of neural networks used in NLP tasks. Joulin et al. (2016) adapted the framework of product quantization to reduce the number of parameters in linear models used for text classification. See et al. (2016) proposed to compress LSTM-based neural machine translation models with pruning algorithms. Lobacheva et al. (2017) showed that the recurrent models could be significantly sparsified with the help of variational dropout (Kingma et al., 2015). Chen et al. (2018b) proposed more compact K-way D-dimensional discrete encoding scheme to replace the “one-hot” encoding of categorical features, such as words in NLP taks. Very recently, Chen et al. (2018a) and Variani et al. (2018) introduced GroupReduce and WEST, two very efficient compression methods for the embedding and softmax layers, based on structured low-rank matrix approximation. Concurrently, Lam (2018) proposed the quantization algorithm for compressing word vectors and showed the superiority of the obtained embeddings on word similarity, word analogy, and question answering tasks.
Tensor methods have also been already successfully applied to neural networks compression. Novikov et al. (2015) coined the idea of reshaping weights of fully-connected layers into high-dimensional tensors and representing them in Tensor Train (TT) (Oseledets, 2011) format. This approach was later extended to convolutional (Garipov et al., 2016) and recurrent (Yang et al., 2017; Tjandra et al., 2017; Yu et al., 2017) neural networks. Furthermore, Lebedev et al. (2015) showed that convolutional layers could be also compressed with canonical (CP) tensor decomposition (Carroll & Chang, 1970; Harshman, 1970). Finally, Wang et al. (2018) compressed both fully-connected and convolutional layers with Tensor Ring decomposition (Zhao et al., 2016). While all these methods allowed to reduce the number of parameters in the networks dramatically, they mostly capitalized on heavy fullyconnected and convolutional layers (present in AlexNet (Krizhevsky et al., 2012) or VGG (Simonyan & Zisserman, 2014)), which became outdated in the following years. Recently, Ma et al. (2019) succesfully applied Block-Term Tensor Decomposition to the compression of self-attention modules in the Transformer (Vaswani et al., 2017) architecture. In this work, we show the benefits of applying tensor machinery to the compression of embedding layers, which are still widely used in NLP.
3 TENSOR TRAIN EMBEDDING
In this section, we briefly introduce the necessary notation and present the algorithm for training the TT–embedding layer. Hereinafter, by N -way tensor X we mean a multidimensional array:
X ∈ RI1×I2×···×IN .
with entries X (i1, . . . , iN ), such that {0 ≤ ik < Ik}Nk=1.
3.1 MOTIVATION
Since most of the parameters in the NLP models occupy the embedding layers, we can greatly reduce size of the entire model by compressing these layers. Our goal is to replace the standard embedding matrix with a more compact, yet powerful and trainable, representation which would allow us to efficiently map words into vectors.
The simplest approach to compactly represent a matrix of a large size is to use the low–rank matrix factorization, which treats matrix E ∈ RI×J as a product of two matrices E = UV>. Here U ∈ RI×R and V ∈ RJ×R are much “thinner” matrices, and R is the rank hyperparameter. Note that rather than training the model with the standard embedding layer, and then trying to compress the obtained embedding, we can initially seek the embedding matrix in the described low–rank format. Then, for evaluation and training, the individual word embedding E[i, :] can be computed as a product U[i, :]V> which does not require materializing the full matrix E. This approach reduces the number of degrees of freedom in the embedding layer from IJ to (I + J)R.
However, typically, in the NLP tasks the embedding dimension J is much smaller than the vocabulary size I , and obtaining significant compression ratio using low-rank matrix factorization is problematic. In order to preserve the model performance, the rank R cannot be taken very small, and the compression ratio is bounded by IJ(I+J)R ≤ J R , which is close to 1 for usually full-rank embedding matrix (see Figure 1 in Chen et al. (2018b)). To overcome this bound and achieve significant compression
ratio even for matrices of disproportional dimensionalities, we reshape them into multidimensional tensors and apply the Tensor Train decomposition, which allows for more compact representation, where the number of parameters falls down to logarithmic with respect to I .
3.2 TENSOR TRAIN DECOMPOSITION
A tensor X is said to be represented in the Tensor Train (TT) format (Oseledets, 2011) if each element of X can be computed as:
X (i1, i2, . . . , id) = R1∑
r1=1 R2∑ r2=1 · · · RN−1∑ rN−1=1 G(1)(i1, r1)G(2)(r1, i2, r2) . . .G(N)(rN−1, iN ),
where the tensors G(k) ∈ RRk−1×Ik×Rk are the so-called TT–cores and R0 = RN = 1 by definition. The minimal values of {Rk}N−1k=1 for which the TT–decomposition exists are called TT–ranks. Note, that the element X (i1, i2 . . . iN ) is just effectively the product of 2 vectors and N − 2 matrices:
X (i1, . . . , iN ) = G(1)[i1, :]︸ ︷︷ ︸ 1×R1 G(2)[:, i2, :]︸ ︷︷ ︸ R1×R2 . . .G(N−1)[:, iN−1, :]︸ ︷︷ ︸ RN−2×RN−1 G(N)[:, iN ]︸ ︷︷ ︸ RN−1×1 ,
where G(k)[:, ik, :] stands for the slice (a subset of a tensor with some indices fixed) of the corresponding TT–core G(k).
The number of degrees of freedom in such a decomposition can be evaluated as ∑N
k=1Rk−1IkRk. Thus, in the case of small ranks, the total number of parameters required to store a tensor in TT– representation is significantly smaller than ∏N k=1 Ik parameters required to store the full tensor of the corresponding size. This observation makes the application of the TT–decomposition appealing in many problems dealing with extremely large tensors.
3.3 TT–MATRIX
Let $X \in \mathbb{R}^{I \times J}$ be a matrix of size $I \times J$. Given two arbitrary factorizations of its dimensions into natural numbers, $I = \prod_{k=1}^{N} I_k$ and $J = \prod_{k=1}^{N} J_k$, we can reshape¹ and transpose this matrix into an $N$-way tensor $\mathcal{X} \in \mathbb{R}^{I_1 J_1 \times I_2 J_2 \times \cdots \times I_N J_N}$ and then apply the TT–decomposition to it, resulting in a more compact representation.
More concretely, define the bijections $\mathcal{I}(i) = (i_1, \dots, i_N)$ and $\mathcal{J}(j) = (j_1, \dots, j_N)$ that map the row and column indices $i$ and $j$ of the matrix $X$ to $N$-dimensional vector-indices such that $0 \le i_k < I_k$ and $0 \le j_k < J_k$ for all $k = 1, \dots, N$. From the matrix $X$ we can form an $N$-way tensor $\mathcal{X}$ whose $k$-th dimension is of length $I_k J_k$ and is indexed by the tuple $(i_k, j_k)$. This tensor is then represented in the TT–format:
$$\mathcal{X}((i_1, j_1), \dots, (i_N, j_N)) = \mathbf{G}^{(1)}[(i_1, j_1), :] \cdots \mathbf{G}^{(N)}[:, (i_N, j_N)]. \quad (1)$$
Such a representation of the matrix in the TT–format is called a TT–matrix (Oseledets, 2010; Novikov et al., 2015) and is also known as a Matrix Product Operator (Pirvu et al., 2010) in the physics literature. The factorizations $(I_1, I_2, \dots, I_N) \times (J_1, J_2, \dots, J_N)$ will be referred to as the shape of the TT–matrix, or TT–shapes. The process of constructing the TT–matrix from the standard matrix is visualized in Figure 1 for a tensor of order 3. Note that in this case the TT–cores are in fact 4-th order tensors, but all the operations defined for tensors in the TT–format are naturally extended to TT–matrices.
3.4 TT–EMBEDDING
By a TT–embedding, we mean a layer with trainable parameters (TT–cores) represented as a TT–matrix $\mathbf{E}$ of the underlying tensor shape $(I_1, I_2, \dots, I_N) \times (J_1, J_2, \dots, J_N)$, which can be transformed into a valid embedding layer $E \in \mathbb{R}^{I \times J}$, with $I = \prod_{k=1}^{N} I_k$ and $J = \prod_{k=1}^{N} J_k$. To specify the shapes of the TT–cores one also has to provide the TT–ranks, which are treated as hyperparameters of the layer and explicitly define the total compression ratio.
¹By reshape we mean a column-major reshape, e.g., numpy.reshape with order='F' in Python.
In order to compute the embedding for a particular word with index $i$ in the vocabulary, we first map the row index $i$ into the $N$-dimensional vector-index $(i_1, \dots, i_N)$, and then calculate the components of the embedding with formula (1). Note that the computation of all its components is equivalent to selecting particular slices in the TT-cores (slices of shape $J_1 \times R_1$ in $\mathbf{G}^{(1)}$, $R_1 \times J_2 \times R_2$ in $\mathbf{G}^{(2)}$, and so on) and performing a sequence of matrix multiplications, which is executed efficiently in modern linear algebra packages, such as BLAS. Pseudocode for the procedure of computing the mapping $i \to (i_1, \dots, i_N)$ is given in Appendix A. In order to construct a TT–embedding layer for a vocabulary of size $I$ and embedding dimension $J$, and to train a model with such a layer, one has to perform the following steps (a compact sketch combining them is given after the list).
• Provide factorizations of $I$ and $J$ into factors $I = I_1 \times I_2 \times \cdots \times I_N$ and $J = J_1 \times J_2 \times \cdots \times J_N$, and specify the set of TT–ranks $\{R_1, R_2, \dots, R_{N-1}\}$.
• Initialize the set of parameters of the embedding $\Theta = \{\mathbf{G}^{(k)} \in \mathbb{R}^{R_{k-1} \times I_k \times J_k \times R_k}\}_{k=1}^{N}$. Concrete initialization scenarios are discussed further in the text.
• During training, given a batch of indices $\{i_1, i_2, \dots, i_b\}$, compute the corresponding embeddings $\{e_1, e_2, \dots, e_b\}$ using Eq. (1) and Algorithm 1.
• The computed embeddings can be followed by any standard layer such as an LSTM (Hochreiter & Schmidhuber, 1997) or self-attention (Vaswani et al., 2017), and trained with backpropagation since they differentiably depend on the parameters $\Theta$.
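Below is a compact PyTorch sketch combining these steps for a single index lookup; the shapes, ranks, and initialization scale are toy assumptions of ours, not the paper's exact implementation:

```python
import torch
import torch.nn as nn

class TTEmbedding(nn.Module):
    """Toy TT-embedding: vocabulary 8*8*8 = 512, embedding dim 4*4*4 = 64."""
    def __init__(self, I=(8, 8, 8), J=(4, 4, 4), ranks=(1, 16, 16, 1)):
        super().__init__()
        self.I = I
        self.cores = nn.ParameterList([
            nn.Parameter(torch.randn(ranks[k], I[k], J[k], ranks[k + 1]) * 0.1)
            for k in range(len(I))
        ])

    def forward(self, idx: int) -> torch.Tensor:
        # Step 1: map the flat row index to the multi-index (i_1, ..., i_N).
        multi_index = []
        for I_k in self.I:
            multi_index.append(idx % I_k)
            idx //= I_k
        # Step 2: contract the selected core slices into a J_1 x ... x J_N block.
        row = self.cores[0][:, multi_index[0]]               # (1, J_1, R_1)
        for core, i_k in zip(self.cores[1:], multi_index[1:]):
            row = torch.einsum('...a,ajb->...jb', row, core[:, i_k])
        return row.reshape(-1)                               # flat J-dim embedding

emb = TTEmbedding()
print(emb(123).shape)  # torch.Size([64])
```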
TT–embedding implies a specific structure on the order of tokens in the vocabulary (the order of rows in the embedding matrix), and determining the optimal order is an appealing problem to solve. However, we leave this problem for future work and use the order produced by the standard tokenizer (sorted by frequency) in our current experiments.
We also experimented with a more general form of TT-decomposition, namely Tensor Ring decomposition (Zhao et al., 2016; Wang et al., 2018). This decomposition by construction has the appealing property of being circular permutation invariant (and, thus, more robust with respect to the order of the tokens), which could have potentially provided an improvement over the TT-based models with simple frequency based ordering. Our experiments with TR decomposition on Transformer for NMT can be found in Appendix B.
Initialization The standard way to initialize an embedding matrix $E \in \mathbb{R}^{I \times J}$ is via, e.g., the Glorot initializer (Glorot & Bengio, 2010), which initializes each element as $E(i, j) \sim \mathcal{N}\left(0, \frac{2}{I+J}\right)$. For the TT–embedding, we can only initialize the TT–cores, and the distribution of the elements of the resulting matrix $E$ is rather non-trivial. However, it is easy to verify that if we initialize each TT–core element as $\mathbf{G}^{(k)}(r_{k-1}, i_k, r_k) \sim \mathcal{N}(0, 1)$, the resulting distribution of the matrix elements $E(i, j)$ has the property that $\mathbb{E}[E(i, j)] = 0$ and $\mathrm{Var}[E(i, j)] = \prod_{k=1}^{N-1} R_k = R^2$. Capitalizing on this observation, in order to obtain the desired variance $\mathrm{Var}[E(i, j)] = \sigma^2$ while keeping $\mathbb{E}[E(i, j)] = 0$, we can simply initialize each TT–core as
$$\mathbf{G}^{(k)}(r_{k-1}, i_k, r_k) \sim \mathcal{N}\left(0, \left(\frac{\sigma}{R}\right)^{2/N}\right). \quad (2)$$
The resulting distribution is not Gaussian; however, it approaches the Gaussian distribution as the TT–rank increases (Figure 2).
In our experiments, we have used the modified Glorot initializer implemented by formula (2), which greatly improved performance, as opposed to initializing TT–cores simply via a standard normal distribution. It is also possible to initialize TT–embedding layer by converting the learned embedding matrix into TT–format using the TT–SVD algorithm (Oseledets, 2011), however, this approach requires the pretrained embedding matrix and does not exhibit better performance in practice.
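A sketch of this initializer, implementing formula (2) (the helper name and arguments are our own):

```python
import torch

def init_tt_core_(core: torch.Tensor, sigma: float, ranks_prod: int, n_cores: int):
    """Fill one TT-core in place following Eq. (2): N(0, (sigma/R)^(2/N)),
    where R^2 equals the product of the TT-ranks."""
    R = ranks_prod ** 0.5
    std = (sigma / R) ** (1.0 / n_cores)  # std of N(0, (sigma/R)^(2/N))
    with torch.no_grad():
        core.normal_(mean=0.0, std=std)

# Example: Glorot-style target variance sigma^2 = 2 / (I + J) for a
# 512 x 64 embedding factorized into 3 cores with TT-ranks (1, 16, 16, 1).
sigma = (2.0 / (512 + 64)) ** 0.5
core = torch.empty(16, 8, 4, 16)
init_tt_core_(core, sigma=sigma, ranks_prod=16 * 16, n_cores=3)
```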
Hyperparameter selection Our embedding layer introduces two additional structure-specific hyperparameters, namely TT–shapes and TT–ranks.
The TT–embedding does not require the vocabulary size $I$ to be represented exactly as the product of factors $I_1, \dots, I_N$; in fact, any factorization $\prod_{k=1}^{N} I_k = \tilde{I} \ge I$ will suffice. However, in order to achieve the highest possible compression ratio for a fixed value of $\tilde{I}$, the factors $\{I_k\}_{k=1}^{N}$ should be as close to each other as possible. Our implementation includes a simple automated procedure for selecting good values of $\{I_k\}_{k=1}^{N}$ during TT–embedding initialization. The factors $J_1, \dots, J_N$ are defined by the embedding dimensionality $J$, which can easily be chosen to support a good factorization, e.g., $512 = 8 \times 8 \times 8$ or $480 = 6 \times 5 \times 4 \times 4$. The values of the TT–ranks directly define the compression ratio, so choosing them too small or too large will result in either a significant performance drop or little reduction in the number of parameters. In our experiments, we set all TT–ranks to 16 for the problems with small vocabularies and to 64–192 for the problems with larger vocabularies, which allowed us to achieve significant compression of the embedding layer at the cost of a tiny sacrifice in the metrics of interest.
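One simple heuristic for such factor selection is sketched below (our own illustration, not necessarily the exact procedure in the implementation):

```python
from itertools import combinations
import numpy as np

def suggest_shape(vocab_size: int, n_factors: int = 3):
    """Pick factors I_1, ..., I_N with product >= vocab_size that are as
    balanced as possible, searching a small window around the N-th root."""
    base = int(np.ceil(vocab_size ** (1.0 / n_factors)))
    best = None
    for shape in combinations(range(max(2, base - 2), base + 3), n_factors):
        prod = int(np.prod(shape))
        if prod >= vocab_size and (best is None or prod < int(np.prod(best))):
            best = shape
    return best or (base,) * n_factors

print(suggest_shape(25000, 3))  # a near-balanced cover of the vocabulary size
```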
4 EXPERIMENTS
Code We have implemented TT–embeddings described in Section 3 in Python using PyTorch (Paszke et al., 2017). The code is available at the anonymous repository https://github.com/ttembedding/tt-embeddings.
Experimental setup We tested our approach on several popular NLP tasks:
• Sentiment analysis — as a starting point in our experiments, we test TT–embeddings on a rather simple task of predicting polarity of a sentence.
• Neural Machine Translation (NMT) — to verify the applicability of TT–embeddings in more practical problems, we test it on a more challenging task of machine translation.
• Language Modeling (LM) — then, we evaluate TT–embeddings on language modeling tasks in the case of extremely large vocabularies.
• Click Through Rate (CTR) prediction — finally, we show that TT–embeddings can be applied for the binary classification with categorical features of significant cardinality.
To demonstrate the generality and wide applicability of the proposed approach, we tested it on various architectures, such as MLPs (CTR), LSTMs (sentiment analysis), and Transformers (NMT, LM).
Note that the Transformers in LM and NMT use the same weight matrix for their embedding and softmax layers (Press & Wolf, 2016; Inan et al., 2016), which already significantly reduces model size. Untying the weights and tensorizing only the embedding layer would increase the number of parameters instead of compressing the model. In our experiments, we therefore use two separate TT-decompositions of the same shape for the embedding and softmax layers and report the compression ratios as $\frac{|V| \times d_{\text{model}}}{2 \times \text{TT-params}}$.
4.1 SENTIMENT ANALYSIS
For this experiment, we have used the IMDB dataset (Maas et al., 2011) with two categories, and the Stanford Sentiment Treebank (SST) (Socher et al., 2013) with five categories. We have taken the most frequent 25000 words for the IMDB dataset and 17200 for SST, embedded them into a J–dimensional space using either standard embedding or TT–embedding layer, and performed classification using a standard bidirectional two–layer LSTM with hidden size h = 128, and dropout rate Pdrop = 0.5.
Our findings are summarized in Table 1. We observe that the models with largely compressed embedding layers can perform equally well or even better than the full uncompressed models. This suggests that learning individual independent embeddings for each particular word is superfluous, as the expressive power of the LSTM is sufficient to make use of these intertwined, yet more compact embeddings. Moreover, the slightly better test accuracy of the compressed models in certain cases (e.g., for the SST dataset of a rather small size) suggests that imposing a specific tensorial low-rank structure on the embedding matrix can be viewed as a special form of regularization, thus potentially improving model generalization. A detailed and comprehensive test of this hypothesis goes beyond the scope of this paper, and we leave it for future work.
4.2 NEURAL MACHINE TRANSLATION
For this experiment, we have trained the Transformer-big model ($d_{\text{model}} = 1024$, $d_{\text{ff}} = 4096$, $h = 16$) from (Vaswani et al., 2017) on the WMT 2014 English–German dataset consisting of roughly 4.5 million sentence pairs. We evaluated on the newstest2014 dataset using beam search with a beam size of 4 and no length penalty. We did not employ checkpoint averaging and used the last checkpoint to compute the BLEU score. Sentences were tokenized with YouTokenToMe² byte-pair encodings, resulting in a joint vocabulary of 32768 tokens. For the full list of hyperparameters, see Appendix C.
Our results are summarized in Table 2. We observe that even in this rather challenging task, both the embedding and softmax layers can be compressed significantly, at the cost of a small drop in the BLEU score. However, with the increase of the compression factor, the performance deteriorates rapidly. Compared to sentiment analysis, NMT is a much more complex task which benefits more from additional capacity (in the form of a more powerful RNN or more transformer blocks) rather than regularization (Bahdanau et al., 2014; Vaswani et al., 2017; Wu et al., 2019), which may explain why we did not manage to improve the model by regularizing its embedding layers.

²https://github.com/VKCOM/YouTokenToMe
TT-embeddings incur an 8% training-iteration time overhead compared to the baseline Transformer-big, because our current implementation relies heavily on the relatively slow torch.einsum function, while the standard embedding and softmax layers make use of fast, highly optimized Tensor Cores for mixed-precision training. We expect a dedicated CUDA kernel to be much more efficient.
4.3 LANGUAGE MODELING
We took the Transformer-XL (Dai et al., 2019), an open-source³ state-of-the-art language modeling architecture at the time of this writing, and replaced its embedding and softmax layers with TT–factorizations. Then, we tested different model configurations on the WikiText–103 (Merity et al., 2016) dataset and report the results in Table 3. For the full list of hyperparameters, see Appendix C.
Compared to sentiment analysis and NMT, we were not able to achieve such high compression ratios for the embedding and softmax layers in LM. However, even a moderate 3.8× compression allowed us to save 100M weights at the cost of a ∼1.5 increase in perplexity.
4.4 CLICK THROUGH RATE PREDICTION
Among other applications of the TT–embedding layer, we chose to focus on CTR prediction, a popular task in digital advertising (He et al., 2014). We consider the open dataset provided by Criteo for the Kaggle Display Advertising Challenge (Criteo Labs, 2014), which consists of 39 categorical features and 45.8M samples and is binary-labeled according to whether the user clicked on the given advertisement. Unique values of the categorical features are bijectively mapped into integers. To reduce the memory footprint, if the size of a corresponding vocabulary is immense (e.g., the cardinality of some features in this dataset is of order $10^6$), these integers are further hashed by taking the modulus with respect to some fixed number such as $10^5$. However, due to the strong compression properties of TT–embeddings, this is not necessary for our approach, and we consider both the full and hashed datasets in our experiments.
³https://github.com/kimiyoung/transformer-xl
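For clarity, the modulus hashing described above amounts to the following one-liner (the bucket size is an assumption for illustration):

```python
def hash_feature(value_id: int, bucket_size: int = 100_000) -> int:
    """Modulus hashing to cap the per-feature vocabulary size; with
    TT-embeddings this step can be skipped and raw ids used directly."""
    return value_id % bucket_size
```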
CTR with the baseline algorithm The task at hand can be treated as a binary classification problem. As a baseline algorithm, we consider a neural network with the following architecture. First, each of the categorical features is passed through a separate embedding layer with embedding size $J$. After that, the embedded features are concatenated and passed through 4 fully-connected layers of 1024 neurons with ReLU activation functions. In all experiments, we used the Adam optimizer with a learning rate of 0.0005. Since many input features have a large number of unique values (e.g., 10131227) and storing the corresponding embedding matrices would be costly, we employ the hashing procedure mentioned earlier.
CTR with TT–embeddings We substitute the embedding layers with the TT–embedding layers. Besides that, we leave the overall structure of the neural network unchanged with the same parameters as in the baseline approach. Table 4 presents the experimental results on the Criteo CTR dataset. To the best of our knowledge, our loss value is very close to the state-of-the-art result (Juan et al., 2016). These experiments indicate that the substitution of large embedding layers with TT–embeddings leads to significant compression ratios (up to 2011 times) with a slight improvement in the test loss, and up to 4200 with a small drop in the test loss. The total size of the compressed model does not exceed 20 Mb, while the baseline model weighs about 160 Mb. The obtained compression ratio suggests that the usage of TT–embedding layers may be beneficial in CTR prediction tasks.
5 DISCUSSION AND FUTURE WORK
We propose a novel embedding layer, the TT–embedding, for compressing huge lookup tables used for encoding categorical features of significant cardinality, such as the index of a token in natural language processing tasks. The proposed approach, based on the TT–decomposition, experimentally proved to be effective, as it greatly reduces the number of training parameters at the cost of a small deterioration in performance. In addition, our method can be easily integrated into any deep learning framework and trained via backpropagation, while capitalizing on reduced memory requirements and increased training batch size.
Our experimental results suggest several appealing directions for future work. First of all, TT–embeddings impose a concrete tensorial low-rank structure on the embedding matrix, which was shown to act as a regularizer that improves the generalization ability of the networks. The properties and conditions of applicability of this regularizer are subject to more rigorous analysis. Secondly, unlike the standard embedding, we can introduce non-linearity into the TT-cores to improve their expressive power (Khrulkov et al., 2019). Additionally, it is important to understand how the order of tokens in the vocabulary affects the properties of networks with TT–embeddings. We hypothesize that there exists an optimal order of tokens which better exploits the particular structure of the TT–embedding and leads to a boost in performance and/or compression ratio. Finally, the idea of applying higher-order tensor decompositions to reduce the number of parameters in neural nets is complementary to more traditional methods such as pruning (Han et al., 2015) and quantization (Hubara et al., 2017; Xu et al., 2018). Thus, it would be interesting to make a thorough comparison of all these methods and investigate whether their combination may lead to even stronger compression.
A MULTIINDEX CONSTRUCTION
Algorithm 1 The algorithm implementing the bijection $\mathcal{I}(i)$ as described in Section 3.3.
Require: $I$ – vocabulary size; $\{I_k\}_{k=1}^{N}$ – an arbitrary factorization of $I$; $i$ – index of the target word in the vocabulary.
Returns: $\mathcal{I}(i) = (i_1, \dots, i_N)$ – the $N$-dimensional index.
Initialize: $L = \{1, I_1, I_1 I_2, \dots, I_1 I_2 \cdots I_{N-1}\}$
for $k = N$ to $1$ do
  $i_k \leftarrow \lfloor i / L[k] \rfloor$
  $i \leftarrow i \bmod L[k]$
end for

Algorithm 2 The algorithm implementing the bijection $(i_1, \dots, i_N) \to i$, inverse to $\mathcal{I}(i)$.
Require: $I$ – vocabulary size; $\{I_k\}_{k=1}^{N}$ – an arbitrary factorization of $I$; $(i_1, \dots, i_N)$ – the $N$-dimensional index.
Returns: $i$ – index of the target word in the vocabulary.
Initialize: $L = \{1, I_1, I_1 I_2, \dots, I_1 I_2 \cdots I_{N-1}\}$
$i \leftarrow 0$
for $k = 1$ to $N$ do
  $i \leftarrow i + i_k \times L[k]$
end for
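For reference, a direct Python transcription of both bijections (a sketch, with the convention that $i_1$ is the least significant digit):

```python
def flat_to_multi(i, factors):
    """Algorithm 1: map the flat vocabulary index i to (i_1, ..., i_N)."""
    L = [1]
    for I_k in factors[:-1]:
        L.append(L[-1] * I_k)            # L = [1, I_1, I_1*I_2, ...]
    multi = []
    for k in reversed(range(len(factors))):
        multi.append(i // L[k])
        i = i % L[k]
    return tuple(reversed(multi))        # (i_1, ..., i_N)

def multi_to_flat(multi, factors):
    """Algorithm 2: the inverse map (i_1, ..., i_N) -> i."""
    L = [1]
    for I_k in factors[:-1]:
        L.append(L[-1] * I_k)
    return sum(i_k * L[k] for k, i_k in enumerate(multi))

assert multi_to_flat(flat_to_multi(123, (8, 8, 8)), (8, 8, 8)) == 123
```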
B TENSOR RING EMBEDDING
Tensor Ring (TR) decomposition is a generalization of the TT-decomposition in which the first and the last cores are 3-dimensional tensors, corresponding to $R_0 = R_N > 1$. Formally, a tensor $\mathcal{X}$ is said to be represented in the TR format (Zhao et al., 2016) if each element of $\mathcal{X}$ can be computed as:
$$\mathcal{X}(i_1, i_2, \dots, i_N) = \sum_{r_0=1}^{R_0} \sum_{r_1=1}^{R_1} \cdots \sum_{r_{N-1}=1}^{R_{N-1}} \mathbf{G}^{(1)}(r_0, i_1, r_1)\,\mathbf{G}^{(2)}(r_1, i_2, r_2) \cdots \mathbf{G}^{(N)}(r_{N-1}, i_N, r_0).$$
Similar to TT, we can define a TR-matrix (see Figure 3) and the corresponding TR-embedding layer.
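Equivalently, each TR element is the trace of a product of core slices, which the following toy NumPy sketch illustrates (shapes are our own example):

```python
import numpy as np

def tr_element(cores, index):
    """X(i1, ..., iN) = trace(G1[:, i1, :] @ ... @ GN[:, iN, :])."""
    result = cores[0][:, index[0], :]
    for core, i_k in zip(cores[1:], index[1:]):
        result = result @ core[:, i_k, :]
    return np.trace(result)  # the trace closes the ring: R_0 == R_N > 1

cores = [np.random.randn(2, 4, 3), np.random.randn(3, 5, 3), np.random.randn(3, 6, 2)]
print(tr_element(cores, (0, 3, 5)))
```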
Table 5 shows the performance of different NMT models with both the embedding and softmax layers replaced by either TT or TR factorizations. To achieve the same compression factor as the corresponding TT models, the TR models must have smaller ranks, which negatively affects their performance. Furthermore, TR is computationally heavier.
C COMPLETE LIST OF HYPERPARAMETERS

1. What is the main contribution of the paper in the field of natural language processing?
2. What is the issue with traditional embedding matrices in NLP tasks, and how does the paper propose to address it?
3. What is the assumption made by the paper regarding the low-rank property of embedding matrices, and how does it impact the performance?
4. How does the proposed approach reduce the number of parameters in the neural network, and what is the trade-off between performance and computational efficiency?
5. Are there any concerns or suggestions for future work regarding the initialization and training methods for the proposed Tensor Train representation?

Review
This paper proposes to use the Tensor Train representation to transform discrete tokens/symbols into their vector representations.
Since neural networks can only work with numerical inputs, in many NLP tasks, where the raw inputs are discrete tokens/symbols, the popular technique is to use "embedding" matrices to find a vector representation of those inputs.
As the authors point out, the embedding matrices usually require a huge number of parameters, since they assign one embedding vector to each input token; to attain competitive performance in real-world applications, we need a large number of embedding vectors, which results in a large number of parameters in the neural network.
The paper assumes that those embedding matrices can be compressed by assuming a low-rank property of the embedding matrices. I think this is a valid assumption in many cases, and the paper shows that the performance degradation under this assumption is relatively small, while the gain, a dramatically reduced number of parameters in the embedding stage, is substantial.
I think the paper is well written and proposes a new direction for finding a memory-efficient representation of symbols. I am not sure that the current initialization techniques or the training method in the paper are the right way to train a Tensor Train "embedding", but I expect that the authors will perform follow-up work on those topics.
ICLR

Title
Efficient Learning of Safe Driving Policy via Human-AI Copilot Optimization
Abstract
Human intervention is an effective way to inject human knowledge into the loop of reinforcement learning, bringing fast learning and training safety. But given the very limited budget of human intervention, it is challenging to design when and how the human expert interacts with the learning agent during training. In this work, we develop a novel human-in-the-loop learning method called Human-AI Copilot Optimization (HACO). To allow the agent sufficient exploration in risky environments while ensuring training safety, the human expert can take over the control and demonstrate to the agent how to avoid potentially dangerous situations or trivial behaviors. The proposed HACO then effectively utilizes the data collected both from the trial-and-error exploration and from the human's partial demonstration to train a high-performing agent. HACO extracts proxy state-action values from the partial human demonstration and optimizes the agent to improve the proxy values while reducing human interventions. No environmental reward is required in HACO. The experiments show that HACO achieves substantially higher sample efficiency in the safe driving benchmark. It can train agents to drive in unseen traffic scenes with a small budget of human intervention and achieve high safety and generalizability, outperforming both reinforcement learning and imitation learning baselines by a large margin. Code and demo videos are available at: https://decisionforce.github.io/HACO/.
1 INTRODUCTION
How to effectively inject human knowledge into the learning process is one of the key challenges in training reliable autonomous agents for safety-critical applications. In reinforcement learning (RL), researchers can inject their intentions into a carefully designed reward function. The learning agent freely explores the environment to collect data and develops the desired behaviors induced by the reward function. However, RL methods have two drawbacks that limit their application in safety-critical tasks: First, the nature of trial-and-error exploration exposes the RL agent to dangerous situations (Saunders et al., 2017). Second, it is difficult to summarize all the intended behaviors to be learned in a reward function. Taking autonomous driving as an example, an ideal policy should acquire a set of skills, such as overtaking, yielding, emergency stopping, and negotiating with other vehicles. It is intractable to manually design a reward function that leads to the emergence of all those behaviors in the trained agent. To mitigate these two challenges, practitioners enforce human intentions through imitation learning (IL), where the agent is trained to imitate expert-generated state and action sequences. During the demonstration, the immature agent does not interact with the risky environment and thus training safety is ensured. High-quality expert demonstrations directly provide optimal solutions for the agent to imitate. However, the IL paradigm suffers from the distributional shift problem (Ross & Bagnell, 2010; Ross et al., 2011), while the induced skills are not sufficiently robust with respect to changes in the control task (Camacho & Michie, 1995).
Different from vanilla RL or IL, human-in-the-loop learning is an alternative paradigm to inject human knowledge, where a human subject accompanies the agent and oversees its learning process. Previous works require the human to either passively advise which action is good (Mandel et al., 2017) or evaluate the collected trajectories (Christiano et al., 2017; Guan et al., 2021; Reddy et al., 2018; Warnell et al., 2018; Sadigh et al., 2017; Palan et al., 2019). This kind of passive human involvement exposes the human-AI system to risks, since the agent explores the environment without protection. Some other works require the human to merely intervene in the exploration by terminating the episode (Saunders et al., 2018; Zhang & Cho, 2016), but it is not practical to terminate and reset the environment instantly in the real world (Xu et al., 2020). Intervening and taking over control from the learning agent is a natural approach to safeguard the human-AI system (Kelly et al., 2019; Spencer et al., 2020). However, a challenge exhibited in previous works is the budget of human intervention. Since human cognitive resources are precious and limited, it is essential to carefully design when and how the human expert is involved in the learning process so that the human knowledge can be injected effectively.

∗Quanyi Li and Zhenghao Peng contribute equally to this work.
In this work, we propose an efficient human-in-the-loop learning method called Human-AI Copilot Optimization (HACO). The key feature of HACO is that it can learn to minimize human intervention and adaptively adjust the level of automation of the learning agent during training. As shown in Figure 1A, HACO allows the human expert to take over the human-AI system in a proactive manner. If the human decides to intervene in the action of the agent, he/she demonstrates to the learning agent the correct actions to overcome the current undesired situation. The human intervention and the partial demonstration are two sources of informative training data. We use an offline RL technique to maintain a proxy value function of the human-AI mixed behavior policy, even though the agent does not have access to the environmental reward during training. To encourage exploration in the state-action space permitted by the human, we also maximize the entropy of the agent's action distribution when the agent is not taken over.
Experiments in the virtual driving environments MetaDrive (Li et al., 2021) and CARLA (Dosovitskiy et al., 2017) show that, with an economic human budget, HACO outperforms RL and IL baselines by a substantial margin in terms of sample efficiency, performance, safety, and generalizability in the unseen testing environment. Thus human-AI copilot optimization is an efficient learning paradigm to inject human knowledge in an online setting.
2 RELATED WORK
Learning from Demonstration. Passive imitation learning such as behavior cloning (Widrow, 1964; Osa et al., 2018; Huang et al., 2020; Sun et al., 2020) and recently proposed offline RL methods (Kumar et al., 2020; Fujimoto et al., 2018; Wu et al., 2019) train agents from an off-the-shelf dataset and guarantee training safety, since no interaction with the environment is needed. Inverse RL methods (Ng et al., 2000; Abbeel & Ng, 2004; Fu et al., 2017; Bloem & Bambos, 2014) learn a reward function from the human demonstration and then use it to incentivize the agents to master the intended behaviors. Proposed more recently, GAIL (Ho & Ermon, 2016) and its variants (Song et al., 2018; Sasaki et al., 2018; Kostrikov et al., 2018) and SQIL (Reddy et al., 2019) compare the trajectory similarity between agents and humans and thus require the agent to interact with the environment. Similar to RL methods, this paradigm exposes the agent to potentially dangerous situations.
Human-in-the-loop Learning Methods. Many works focus on incorporating humans into the training loop of conventional RL or IL paradigms. DAgger (Ross et al., 2011) and its extensions (Kelly et al., 2019; Zhang & Cho, 2016; Hoque et al., 2021) correct the compounding error (Ross & Bagnell, 2010) of behavior cloning by periodically requesting the expert to provide more demonstrations. Instead of providing demonstrations upon request, Human-Gated DAgger (HG-DAgger) (Kelly et al., 2019), Expert Intervention Learning (EIL) (Spencer et al., 2020) and Intervention Weighted Regression (IWR) (Mandlekar et al., 2020) empower the expert to intervene in exploration and carry the agent to safe states. However, these methods do not impose constraints to reduce human intervention and do not utilize the data from the free exploration of the agent. Human subjects can also be involved in the loop by providing preferences based on evaluative feedback on two behavior sequences generated by the agent (Christiano et al., 2017; Sadigh et al., 2017; Palan et al., 2019; Ibarz et al., 2018; Cui & Niekum, 2018).
Human-AI copilot or shared autonomy is a more intimate form of human-in-the-loop methods. The AI agent and the human work together simultaneously to achieve a common goal. By giving human guidance and feedback instantly at run-time, the explorable state and action spaces can be greatly narrowed down (Saunders et al., 2018). The learning goal can further match the task objective by providing extra human feedback combined with the reward function (Reddy et al., 2018; Warnell et al., 2018; Wu et al., 2021; Cederborg et al., 2015; Arumugam et al., 2019). Human-AI copilot is helpful and practical when applying RL to real-world tasks where safety constraints must be satisfied (Garcıa & Fernández, 2015; Amodei et al., 2016; Bharadhwaj et al., 2020; Alshiekh et al., 2018). In our previous work (Peng et al., 2021), we made an attempt to develop a method called Expert-Guided Policy Optimization (EGPO), where a PPO expert policy is involved to monitor the learning agent. The differences are twofold: (1) We substitute the expert with a human and design a special mechanism to mitigate the delayed feedback error; (2) Based on a comprehensive ablation study and prototyping, we remove redundant designs like the takeover function and the need for a reward function, making the proposed method simple yet effective.
Reducing human burden is a major challenge in human-in-the-loop methods. A feasible solution is to learn an intervention function that imitates human intervention signals and stops the catastrophic actions of agents (Kelly et al., 2019; Zhang & Cho, 2016; Saunders et al., 2017; Abel et al., 2017), which can relieve the mental stress of the human subject during training. In this work, we devise our learning scheme explicitly to include the human cognitive cost as one of the objectives to minimize.
3 HUMAN-AI COPILOT OPTIMIZATION
In this section, we introduce Human-AI Copilot Optimization (HACO), an efficient learning algorithm that trains agents from human interventions, partial demonstrations and free exploration. For human-in-the-loop learning, it is essential to design when and how to engage human subjects. The major issue is the cognitive cost of the human subject (Zhang et al., 2021). Frequent querying might bring tremendous cognitive cost and exhaust the human expert, causing incorrect or delayed feedback that hinders the training. Thus the proposed pipeline aims to minimize the human intervention cost during the training, which reduces the reliance on the expert’s demonstration over time and improves the learning agent’s autonomy. The overall workflow of HACO is presented in Algorithm 1.
3.1 HUMAN-AI COPILOT TRAINING PARADIGM
We aim to learn an autonomous agent with policy $\pi_n(a_n|s)$ that can take informed actions $a_n$ in state $s$. As shown in Fig. 1, we frame the human-AI copilot paradigm as extending the standard reinforcement learning diagram by incorporating a human expert. At each step, the human expert oversees the current state and decides whether to intervene. If necessary, he/she will execute the human action $a_h$ to overwrite the agent's action $a_n$. We denote the human intervention by a Boolean indicator $I(s, a_n)$, and thus the action applied to the environment, called the safe action, is $\hat{a} = I(s, a_n)\, a_h + (1 - I(s, a_n))\, a_n$. Denoting the human policy as $\pi_h$, the actual trajectories that occur during training are derived from a shared behavior policy $\pi_b$:
$$\pi_b(a|s) = \pi_n(a|s)(1 - I(s, a)) + \pi_h(a|s)\, G(s), \quad (1)$$
wherein $G(s) = \int_{a' \in \mathcal{A}} I(s, a')\, \pi_n(a'|s)\, da'$ is the probability of the agent choosing an action that will be rejected by the human.
We call the transition sequences during takeover, $\{(s_t, a_{n,t}, a_{h,t}, I(s_t, a_{n,t}), s_{t+1}), \dots\}$, the partial demonstration. The partial demonstration and the free-exploration transitions are recorded in the replay buffer $\mathcal{B}$ and fed to the training pipeline. Note that we do not need to store the environmental reward and cost in the buffer, since the proposed method does not use them.
In human-AI copilot training, the human is obligated to guide the agent's learning and to safeguard the learning process by proactively taking over control if necessary. This paradigm rules out dispensable states and mitigates the safety concern in the free exploration of RL and active imitation learning methods (Ross et al., 2011). Different from previous offline RL works that train from a fixed dataset (Bojarski et al., 2016; Ho & Ermon, 2016; Reddy et al., 2019; Kumar et al., 2020; Fujimoto et al., 2018; Wu et al., 2019), where no closed-loop feedback is accessible, human-AI copilot training produces partial demonstrations that bring into the learning the human knowledge necessary to overcome dangerous situations. The copilot nature alleviates the distributional shift problem, since the human intervenes when the agent performs suspicious behaviors, so that there is continuity of the state visitation between the agent and the expert.
In the next section, we introduce how we instantiate the human-AI copilot paradigm with a human-efficient algorithm that can effectively optimize the agent toward a safe and high-performing policy.
3.2 LEARNING OBJECTIVES
We form three objectives that fully utilize the human data: (1) The agent should maximize a proxy value function $Q(s, a)$ which reflects human intentions on how to finish the task. (2) The agent should explore thoroughly to visit the state-action subspace permitted by the human. Concretely, we maximize the action distribution entropy $\mathcal{H}(\pi(\cdot|s))$. (3) The agent should maximize the level of automation and reduce human intervention. The episodic human intervention is estimated by an intervention value function $Q^I(s, a)$ based on the step-wise intervention cost $C(s, a)$. Thus the overall learning objective of HACO becomes:
$$\max_\pi \mathbb{E}\left[Q(s, a) + \mathcal{H}(\pi) - Q^I(s, a)\right]. \quad (2)$$
We then discuss the practical implementation of the aforementioned design goals.
Proxy value function. HACO follows a reward-free setting, so we cannot estimate the expected state-action value based on a ground-truth reward function defined by the environment. We instead estimate a proxy value function $Q(s, a; \phi)$ (where $\phi$ denotes the model parameters) that captures the ordinal preference of the human expert, which implicitly reflects human intentions. We utilize conservative Q-learning (Kumar et al., 2020) and form the optimization problem of the proxy value function as:
$$\min_\phi \mathbb{E}_{(s, a_n, a_h, I(s, a_n)) \sim \mathcal{B}} \left[I(s, a_n)\left(Q(s, a_n; \phi) - Q(s, a_h; \phi)\right)\right]. \quad (3)$$
The above optimization objective can be interpreted as being optimistic about the human's action $a_h$ and pessimistic about the agent's action $a_n$. The proxy value function learns to represent the high-value state-action subspace preferred by the human expert.
Entropy regularization. If the learning agent visits the human-preferable subspace insufficiently during free exploration, the states evoking high proxy values are rarely encountered, making the back-propagation of the proxy value to preceding states difficult and thus damaging the learning. To encourage exploration, we adopt the entropy regularization technique of (Haarnoja et al., 2018) and form an auxiliary signal to update the proxy value function apart from Eq. 3:
$$\min_\phi \mathbb{E}_{(s_t, \hat{a}_t, s_{t+1}) \sim \mathcal{B}} \left[y - Q(s_t, \hat{a}_t; \phi)\right]^2, \quad y = \gamma \mathbb{E}_{a' \sim \pi_n(\cdot|s_{t+1})} \left[Q(s_{t+1}, a'; \phi') - \alpha \log \pi_n(a'|s_{t+1})\right], \quad (4)$$
wherein $\hat{a}_t$ is the executed action at state $s_t$, $\phi'$ denotes the delay-updated parameters of the target network, and $\gamma$ is the discount factor. Since the environment reward is not accessible to HACO, we remove the reward term from the update target $y$. Combining Eq. 3 and Eq. 4, the formal optimization objective of the proxy value function becomes:
$$\min_\phi \mathbb{E}_{\mathcal{B}} \left[(y - Q(s_t, \hat{a}_t; \phi))^2 + I(s_t, a_{n,t})\left(Q(s_t, a_{n,t}; \phi) - Q(s_t, a_{h,t}; \phi)\right)\right]. \quad (5)$$
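A PyTorch-style sketch of this proxy-value update for a sampled batch (the network and batch interfaces, such as `sample_with_logprob`, are our own assumptions for illustration):

```python
import torch

def proxy_q_loss(q_net, target_q_net, policy, batch, gamma=0.99, alpha=0.2):
    """Eq. 5: reward-free entropy-regularized TD term plus the ranking term."""
    s, a_exec, a_n, a_h, takeover, s_next = batch  # tensors from the buffer B
    with torch.no_grad():
        a_next, logp_next = policy.sample_with_logprob(s_next)
        y = gamma * (target_q_net(s_next, a_next) - alpha * logp_next)
    td_term = (y - q_net(s, a_exec)).pow(2)
    # Pessimistic about the agent's action, optimistic about the human's.
    ranking_term = takeover * (q_net(s, a_n) - q_net(s, a_h))
    return (td_term + ranking_term).mean()
```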
Algorithm 1: The workflow of HACO during training
1: Initialize an empty replay buffer $\mathcal{B}$
2: while training is not finished do
3:   while the episode is not terminated do
4:     $a_{n,t} \sim \pi_n(\cdot|s_t)$  (retrieve the agent's action)
5:     $I(s_t, a_{n,t}) \leftarrow$ the human expert decides whether to intervene by observing the current state $s_t$
6:     if $I(s_t, a_{n,t})$ is True then
7:       $a_{h,t} \leftarrow \pi_h(\cdot|s_t)$  (retrieve the human's action)
8:       Apply $a_{h,t}$ to the environment
9:     else
10:      Apply $a_{n,t}$ to the environment
11:    if $I(s_t, a_{n,t})$ is True and $I(s_{t-1}, a_{n,t-1})$ is False then
12:      $C(s_t, a_{n,t}) \leftarrow$ compute the intervention cost following Eq. 6
13:    else
14:      $C(s_t, a_{n,t}) \leftarrow 0$  (set the intervention cost to zero)
15:    Record $s_t$, $a_{n,t}$, $I(s_t, a_{n,t})$ and $a_{h,t}$ (if $I(s_t, a_{n,t})$) in the buffer $\mathcal{B}$
16:  Update the proxy value $Q$, the intervention value $Q^I$ and the policy $\pi$ according to Eq. 5, Eq. 7, and Eq. 8, respectively
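A minimal Python sketch of one data-collection step from Algorithm 1 (the interfaces `human_intervenes`, `human_action`, `env.step`, and `buffer.add` are placeholders we assume for illustration; `intervention_cost` implements Eq. 6 and is sketched below):

```python
def haco_step(env, obs, agent, human, buffer, prev_takeover):
    """One environment step of human-AI copilot data collection."""
    a_n = agent.sample_action(obs)                # agent's proposal
    takeover = human.human_intervenes(obs, a_n)   # I(s, a_n)
    a_h = human.human_action(obs) if takeover else None
    executed = a_h if takeover else a_n           # the safe action
    next_obs, _, done, info = env.step(executed)  # env reward is discarded
    # Non-zero cost only at the first step of a takeover (lines 11-14).
    cost = intervention_cost(a_n, a_h) if takeover and not prev_takeover else 0.0
    buffer.add(obs, a_n, a_h, takeover, cost, next_obs, done)
    return next_obs, done, takeover
```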
Reducing human interventions. Directly optimizing the agent policy according to the proxy value function will lead to failure when evaluating the agent without human participation. This is because $Q(s, a)$ represents the proxy value of the mixed behavior policy $\pi_b$ instead of the learning agent's $\pi_n$, due to the existence of human intervention. It is possible that the agent learns to deliberately abuse human intervention by always taking actions that violate human intentions, such as driving off the road when near the boundary, which forces the human to take over and provide demonstrations. In this case, the level of automation of the agent is low and the human subject is exhausted by providing demonstrations. The ablation study result in Table 2(c) illustrates this phenomenon.
To economically utilize the human budget and reduce human interventions over time, we penalize the agent action that triggers human intervention in a mild manner, by using the cosine similarity between the agent's action and the human's action as the intervention cost function in the form below:
$$C(s, a_n) = 1 - \frac{a_n^\top a_h}{\|a_n\|\,\|a_h\|}, \quad a_h \sim \pi_h(\cdot|s). \quad (6)$$
The agent receives a large penalty only when its action is significantly different from the expert action in terms of cosine similarity.
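A sketch of this cost in NumPy (the small-denominator guard is our own addition):

```python
import numpy as np

def intervention_cost(a_n: np.ndarray, a_h: np.ndarray) -> float:
    """Eq. 6: one minus the cosine similarity between the two actions.
    Near 0 when the agent almost agrees with the human, up to 2 when opposed."""
    denom = np.linalg.norm(a_n) * np.linalg.norm(a_h)
    return 1.0 - float(np.dot(a_n, a_h)) / max(denom, 1e-8)
```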
A straightforward form of $C$ is a constant $+1$ whenever the human expert issues an intervention. However, we find that there usually exists a temporal mismatch between the human intervention and the faulty action, so that the intervention cost is given to the agent at a delayed time step $t + \epsilon$. It is possible that the agent's action $a_{n, t+\epsilon}$ is a correct action that saves the agent from danger but is mistakenly marked as a faulty action that triggers human intervention. In the ablation study, we find that using the constant cost yields inferior performance compared to the cosine similarity.
As shown in Lines 11-14 of Algorithm 1, we yield a non-zero intervention cost only at the first step of a human intervention. This is because a human intervention triggered by the exact action $a_{n,t}$ indicates that this action violates the underlying intention of the human at that moment. Minimizing the chance of such actions will increase the level of automation.
To improve the level of automation, we form an additional intervention value function $Q^I(s, a)$ as the expected cumulative intervention cost, similar to estimating the state-action value in Q-learning through the Bellman equation:
$$Q^I(s_t, a_{n,t}) = C(s_t, a_{n,t}) + \gamma \mathbb{E}_{s_{t+1} \sim \mathcal{B},\, a_{t+1} \sim \pi_n(\cdot|s_{t+1})} \left[Q^I(s_{t+1}, a_{t+1})\right]. \quad (7)$$
This value function is used to directly optimize the policy.
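A sketch of learning $Q^I$ by temporal-difference updates following Eq. 7 (with the same assumed interfaces as above):

```python
import torch

def intervention_q_loss(qi_net, target_qi_net, policy, batch, gamma=0.99):
    """Eq. 7: TD learning of the expected cumulative intervention cost."""
    s, a_n, cost, s_next = batch
    with torch.no_grad():
        a_next, _ = policy.sample_with_logprob(s_next)
        target = cost + gamma * target_qi_net(s_next, a_next)
    return (target - qi_net(s, a_n)).pow(2).mean()
```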
Learning policy. Using the entropy-regularized proxy value function $Q(s, a)$ as well as the intervention value function $Q^I(s, a)$, we form the policy improvement objective as:
$$\max_\theta \mathbb{E}_{s_t \sim \mathcal{B}} \left[Q(s_t, a_n) - \alpha \log \pi_n(a_n|s_t; \theta) - Q^I(s_t, a_n)\right], \quad a_n \sim \pi_n(\cdot|s_t; \theta). \quad (8)$$
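And a matching sketch of the policy update in Eq. 8 (again with assumed interfaces):

```python
import torch

def policy_loss(policy, q_net, qi_net, states, alpha=0.2):
    """Eq. 8: maximize the entropy-regularized proxy value minus Q^I."""
    a_n, logp = policy.sample_with_logprob(states)  # reparameterized sample
    objective = q_net(states, a_n) - alpha * logp - qi_net(states, a_n)
    return (-objective).mean()                      # minimize the negative
```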
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
Task. We focus on the driving task in this work. This is because driving is an important decision-making problem with a huge social impact, where safety and training efficiency are critical. Since much research on autonomous driving involves a human in a real vehicle (Bojarski et al., 2016; Kelly et al., 2019), human safety and human cognitive cost become practical challenges that limit the application of learning-based methods in industry. Therefore, the driving task is an ideal benchmark for the human-AI copilot paradigm.
Simulator. Considering the potential risks of employing human subjects in physical experiments, we benchmark different approaches in the driving simulator. We employ a lightweight driving simulator MetaDrive (Li et al., 2021), which preserves the capacity to evaluate the safety and generalizability in unseen environments. The simulator is implemented based on Panda3D (Goslin & Mine, 2004) and Bullet Engine that has high efficiency as well as accurate physics-based 3D kinetics. MetaDrive uses procedural generation to synthesize an unlimited number of driving maps for the split of training and test sets, which is useful to benchmark the generalization capability of different approaches in the context of safe driving. Some generated driving scenes are presented in Fig. 2. The simulator is also extremely efficient and flexible so that we can run the human-AI copilot experiment in real-time. Though we mainly describe the setting of MetaDrive in this section, we also experiment on CARLA (Dosovitskiy et al., 2017) simulator in Sec. 4.3.
Training Environment. In the simulator, the task for the agent is to steer the target vehicle with low-level control signals, namely acceleration, brake and steering, to reach the predefined destination and receive a success flag. The ratio of episodes in which the agent successfully reaches the destination is called the success rate. To increase the difficulty of the task, we scatter obstacles randomly in each driving scene, such as movable traffic vehicles, fixed traffic cones, and warning triangles.
The observation contains (1) the current states such as the steering, heading, velocity and relative distance to boundaries etc., (2) the navigation information that guides the vehicle toward the destination, and (3) the surrounding information encoded by a vector of 240 Lidar-like distance measures of the nearby vehicles.
Though HACO does not receive the environmental reward during training, we provide a reward function to train the baseline methods and to evaluate HACO at test time. The reward function contains a dense driving reward, a speed reward and a sparse terminal reward. The driving reward measures the longitudinal movement toward the destination. We also reward the agent according to its velocity and give a sparse reward of +20 when the agent arrives at the destination.
Each collision with a traffic vehicle or obstacle yields an environmental cost of +1. Note that HACO cannot access this cost during training. This cost is used to train the safe RL baselines as well as to test the safety of trained policies. We term the episodic cost the safety violation, which measures the safety of a policy.
We invite the human expert to supervise the real-time exploration of the learning agent with hands on the steering wheel, as shown in Fig. 1B. When a dangerous situation is about to happen, the human takes over the vehicle by pressing the paddle beside the wheel and starts controlling the vehicle by steering the wheel and pressing the pedals.
Split of training and test sets. Different from the conventional RL setting where the agent is trained and tested in the same fixed environment, we focus on evaluating the generalization performance by testing the trained agents in separate test environments. We split the driving scenes into a training set and a test set with 50 different scenes in each. After each training iteration, we roll out the learning agent without the guardian in the test environments and record the success rate and safety violation given by the environment, presented in Table 1.
* During HACO training, in 8316 ± 497.90 steps out of the total 30K steps the human expert intervenes and overwrites the agent’s actions. The whole training takes about 50 minutes.
Implementation details. We conduct experiments in the driving simulator and implement algorithms using RLLib (Liang et al., 2018), an efficient distributed learning system. When training the baselines, we host 8 concurrent trials on an Nvidia GeForce RTX 2080 Ti GPU. Each trial consumes 2 CPUs with 8 parallel rollout workers. Except for the human-in-the-loop experiments, all baseline experiments are repeated 5 times with different random seeds. The main experiments of HACO are conducted on a local computer with an Nvidia GeForce RTX 2070 and repeated 3 times. The ablations and baseline human-in-the-loop experiments are run once due to the limited human budget. One human subject participates in each experiment. In all tables and figures, we provide the standard deviation if the experiments are repeated over multiple runs with different random seeds. Information about other hyper-parameters is given in the Appendix.
4.2 BASELINE COMPARISON
We compare our method to vanilla RL and safe RL methods, which inject the human intention and constraint through pre-defined reward and cost functions. We test vanilla RL methods, PPO (Schulman et al., 2017) and SAC (Haarnoja et al., 2018), with the cost added to the reward as an auxiliary negative reward, called reward shaping (RS). Three common safe RL baselines, Constrained Policy Optimization (CPO) (Achiam et al., 2017), PPO-Lagrangian (Stooke et al., 2020) and SAC-Lagrangian (Ha et al., 2020), are evaluated.
Apart from the RL methods, we also generate a human demonstration dataset containing one hour of expert demonstrations, with about 36K transitions in the training environments. The demonstrations in the dataset are of high quality: the success rate of the episodes reaches 98% and the safety violation is down to 0.16. Using this dataset, we evaluate the passive IL method Behavior Cloning, the active IL method GAIL (Ho & Ermon, 2016) and the offline RL method CQL (Kumar et al., 2020). We also run Human-Gated DAgger (HG-DAgger) (Kelly et al., 2019) and Intervention Weighted Regression (IWR) (Mandlekar et al., 2020) as baselines of human-in-the-loop methods based on this dataset and the human-AI copilot workflow.
Training-time Safety. Training-time safety is measured by the total training safety violation, the total number of critical failures occurring during training. Note that the environmental cost here is different from the human intervention cost in HACO. As illustrated in Table 1 and Fig. 3A, HACO achieves huge success in training-time safety. Apart from the empirical results, we provide a proof in the Appendix showing that the training safety can be bounded by the guardian. Under the protection of the human expert, HACO yields only 30.14 total safety violations in the whole training process, two orders of magnitude better than other RL baselines, even though HACO does not access the environmental cost. IWR and HG-DAgger also achieve drastically lower training safety violations, showing the power of human-in-the-loop methods. The most competitive RL baseline, SAC-RS, which achieves a similar test success rate, causes 2767.77 training safety violations on average, which is much higher
than HACO. The active IL method GAIL also has significantly higher safety violations than HACO and its performance is unsatisfactory.
From the safety perspective, we find that the reward shaping technique is inferior to the Lagrangian method, for both the SAC and PPO variants. PPO causes more violations than SAC, probably due to its relatively lower sample efficiency and slower convergence speed.
Sample Efficiency and Human Cognitive Cost. The human-AI system is not only well protected by the human but also achieves superior sample efficiency with limited data usage. As shown in Fig. 3A and Table 1, we find that HACO is an order of magnitude more efficient than the RL baselines. HACO achieves a 0.83 test success rate by interacting with the environment for merely 30K steps, of which the human provides safe actions as demonstrations in only 8,316 steps on average. During nearly 50 minutes of human-AI copilot training, the human provides demonstrations in only 27% of the steps.
The human-in-the-loop baselines IWR and HG-DAgger consume 50K steps of human budget, and only IWR achieves a satisfactory success rate. By prioritizing samples from human intervention, IWR manages to learn from human interventions the key actions for escaping dangerous situations caused by the compounding error. Without re-weighting the human takeover data, HG-DAgger fails to learn from the few but important human demonstrations. The learning curves of these two methods can be found in the Appendix.
Unlike HACO, all the learning-from-demonstration methods fail with the dataset containing 36K transitions. Compared to IL methods, which optimize agents to imitate exact actions at each time step, HACO considers learning on a trajectory basis. We incentivize the agent to choose an action that can bring potential return in the future trajectory, instead of only mimicking the expert's behavior at each step. On the other hand, HACO gathers expert data in an online manner through the human-AI copilot, which better mitigates the distributional shift that is severe in offline RL methods.
Learning Dynamics. The intervention minimization mechanism in HACO reduces human cognitive cost. As shown in Fig. 3B, the takeover rate gradually decreases in the course of learning. The curve of episodic intervention cost suggests that the human intervention frequency becomes lower and the similarity between agent’s action and human’s action increases. We also provide visualization of the learned proxy value function in the Appendix, showing that the learning scheme of HACO can effectively encode human preference into the proxy values.
4.3 ABLATION STUDY
Takeover Policy Analysis. We request the human subjects to try two intervention strategies. The first is to take over at a low frequency and produce a long trajectory at each intervention. In this way the intervention cost becomes sparse. The other strategy is to intervene more frequently and provide fragmented demonstrations. As shown in Table 2(a), the experiment shows that the proposed HACO works better with dense human intervention signals. Agents trained with long trajectories achieve inferior success rates and episodic rewards compared to agents trained with dense intervention signals.
Cosine Similarity Cost Function. As shown in Table 2(b), we replace the intervention cost function in Eq. 6 with a constant value of +1 whenever human intervention happens. We find that the agent learns to stay at the spawn points and does not move at all at test time. As discussed in Sec. 3.2, it is possible that the human intervenes at incorrect timing. This makes the agent fail to learn how to drive correctly. Using the negative cosine similarity to measure the divergence between the agent's and the human's actions alleviates this phenomenon, since the human intervention penalty is down-weighted when the agent provides an action that adheres to the human intention.
Intervention Minimization. As shown in Table 2(c), when removing the intervention minimization mechanism, the agent drives directly toward the boundary. This is because the agent learns to abuse the human expert to take over all the time, which increases proxy values but causes consistent out-of-the-road failures in testing. This result shows the importance of intervention minimization.
CARLA Experiment. To test the generality of HACO, we run HACO in the CARLA simulator (Dosovitskiy et al., 2017). We use the top-down semantic view provided by CARLA as the input and a 3-layer CNN as the feature extractor for HACO and the PPO baseline. For PPO, the reward follows the setting described in CARLA and is based on the velocity and the completion of the road. We train HACO (with a human expert) and PPO in CARLA town 1 and report the test performance in CARLA town 2. Table 3 shows that the proposed HACO can be successfully deployed in the CARLA simulator with visual observation and achieve comparable results. Also, it can train the driving agent with a new CNN feature-extractor in 10 minutes with only 8,000 samples in the environment. The video is available at: https://decisionforce.github.io/HACO/.
5 CONCLUSION
We develop an efficient human-in-the-loop learning method, Human-AI Copilot Optimization (HACO), which trains agents from the human interventions and partial demonstrations. The method incorporates the human expert in the interaction between agent and environment to ensure safe and efficient exploration. The experiments on safe driving show that the proposed method achieves superior training-time safety, outperforming RL and IL baselines. Besides, it shows a high sample efficiency for rapid learning. The constrained optimization technique is used to prevent the agent from excessively exploiting the human expert, which also decreases the takeover frequency and saves valuable human budget.
One limitation of this work is that the trained agents behave conservatively compared to the agents from the RL baselines. Aiming to ensure the training-time safety of the copilot system, the human expert typically slows the vehicle down to rescue it from risky situations. This makes the agent tend to drive slowly and exhibit behaviors such as frequent yielding at intersections. In future work, we will explore the possibility of learning more sophisticated skills.
Acknowledgments This project was supported by the Centre for Perceptual and Interactive Intelligence (CPII) Ltd under InnoHK supported by the Innovation and Technology Commission.
ETHICS STATEMENT
The proposed Human-AI Copilot Optimization algorithm aims at developing a new human-friendly human-in-the-loop training framework. We successfully increase the level of automation after human-efficient training. We believe this work has a great positive social impact, advancing the development of more intelligent AI systems that impose less burden on humans.
We employ human subjects to participate in the experiments. Human subjects can stop the experiment if any discomfort happens. No human subjects were harmed in the experiments since we test in the driving simulator. The human subjects earn an hourly salary above the average in our community. Each experiment lasts nearly one hour. Human participants rest at least three hours after one experiment. During training and data processing, no personal information is revealed in the collected dataset or the trained agents.
A MAIN THEOREM AND THE PROOF
In this section, we derive the upper bound of the discounted probability of failure of HACO, showing that we can bound the training safety with the guardian. Theorem 1 (Upper bound of training risk). The expected cumulative probability of failure $V_{\pi_b}$ of the behavior policy $\pi_b$ in HACO is bounded by the error rate of the human expert's action $\epsilon$, the error rate of the human expert's intervention $\kappa$ and the tolerance of the human expert $K'$:
$$V_{\pi_b} \le \frac{1}{1-\gamma}\left[\epsilon + \kappa + \frac{\gamma\epsilon}{1-\gamma} K'\right],$$
wherein $K' = \max_s K(s) = \max_s \int_{a \in \mathcal{A}_h(s)} da \ge 0$ is called the human expert tolerance.
The human expert tolerance $K'$ becomes larger if the human relaxes the intervention and allows the agent to explore the environment more freely.
The proof is given as follows.
Notations. Before starting, we first recap the notation. In HACO, a human subject copilots with the learning agent. The agent's policy is $\pi_n$ and the human's policy is $\pi_h$. Both policies produce actions in the bounded action space $\mathcal{A} \subseteq \mathbb{R}^{|\mathcal{A}|}$. The human expert decides to intervene at a certain state given the agent's action $a_n$. The human intervention is denoted by a Boolean function $I(s, a)$. The mixed behavior policy $\pi_b$ that produces the real actions applied to the environment is denoted as:
$$\pi_b(a|s) = \pi_n(a|s)(1 - I(s, a)) + \pi_h(a|s)\, G(s), \quad (9)$$
wherein $G(s) = \int_{a' \in \mathcal{A}} I(s, a')\, \pi_n(a'|s)\, da'$ is a function denoting the probability of choosing an action that will be rejected by the human.
Therefore, at a given state, we can split the action space into two parts: where intervention will or will not happen if the agent samples an action from it. We denote the confident action space as:
$$\mathcal{A}_h(s) = \{a : I(s, a) \text{ is False}\}. \quad (10)$$
The confident action space contains the actions that will not be rejected by the human expert at state $s$.
We also define the ground-truth indicator $C_{gt}$ denoting whether the action will lead to an unsafe state. This unsafe state is determined by the environment and is not revealed to the learning algorithm:
$$C_{gt}(s, a) = \begin{cases} 1, & \text{if } s' = \mathcal{P}(s, a) \text{ is an unsafe state}, \\ 0, & \text{otherwise}. \end{cases} \quad (11)$$
Therefore, at a given state $s$, the step-wise probability of failure for an arbitrary policy $\pi$ is:
$$\mathbb{E}_{a \sim \pi(\cdot|s)} C_{gt}(s, a) \in [0, 1]. \quad (12)$$
Now we denote the cumulative discounted probability of failure as:

V_\pi(s_t) = \mathbb{E}_{\tau\sim\pi} \sum_{t'=t}^{\infty} \gamma^{t'-t}\, C_{gt}(s_{t'}, a_{t'}), (13)

which accounts for the chance of entering dangerous states at the current time step as well as in future trajectories deduced by the policy π. We use V_{π_h} = \mathbb{E}_{\tau\sim\pi_h} V_{\pi_h}(s_0) to denote the expected cumulative discounted probability of failure of the human. Following the same definition as V_{π_h}, we can also write the expected cumulative discounted probability of failure of the behavior policy as V_{\pi_b} = \mathbb{E}_{\tau\sim\pi_b} V_{\pi_b}(s_0) = \mathbb{E}_{\pi_b} \sum_{t=0}^{\infty} \gamma^{t} C_{gt}(s_t, a_t).
Assumption. Now we introduce two important assumptions on the human expert.
Assumption 1 (Error rate of human action). For all states, the step-wise probability that the human expert produces an unsafe action is bounded by a small value ε < 1:

\mathbb{E}_{a\sim\pi_h(\cdot|s)}\, C_{gt}(s, a) \le \epsilon. (14)
Assumption 2 (Error rate of human intervention). For all states, the step-wise probability that the human expert does not intervene when the agent produces an unsafe action is bounded by a small value κ < 1:

\int_{a\in A} [1 - I(s, a)]\, C_{gt}(s, a)\, da = \int_{a\in A_h(s)} C_{gt}(s, a)\, da \le \kappa. (15)
These two assumptions do not impose any constraint on the structure of the human expert policy.
Lemmas. We propose several useful lemmas and their corresponding proofs, which are used in the main theorem.
Lemma 2 (The performance difference lemma).

V_{\pi_b} = V_{\pi_h} + \frac{1}{1-\gamma}\, \mathbb{E}_{s\sim P_{\pi_b}} \mathbb{E}_{a\sim\pi_b}\big[A_{\pi_h}(s, a)\big]. (16)

Here P_{π_b} means that the states are subject to the marginal state distribution deduced by the behavior policy π_b. A_{π_h}(s, a) is the advantage of the expert at the current state-action pair: A_{π_h}(s, a) = C_{gt}(s, a) + γV_{π_h}(s') − V_{π_h}(s), where s' = P(s, a) is the next state. This lemma was proposed and proved by Kakade & Langford (2002) and is useful for showing the behavior policy's safety. In the original proposition, V and A represent the expected discounted return and advantage w.r.t. the reward, respectively. Here we replace the reward with the indicator C_{gt}, so that the value functions V_{π_b} and V_{π_h} represent the expected cumulative failure probability.
Lemma 3. The cumulative probability of failure of the expert V_{π_h}(s) is bounded for all states:

V_{\pi_h}(s) \le \frac{\epsilon}{1-\gamma}.

Proof. Following Assumption 1:

V_{\pi_h}(s_t) = \mathbb{E}_{\pi_h}\Big[\sum_{t'=t}^{\infty} \gamma^{t'-t} C_{gt}(s_{t'}, a_{t'})\Big] = \sum_{t'=t}^{\infty} \gamma^{t'-t}\, \mathbb{E}_{\pi_h}\big[C_{gt}(s_{t'}, a_{t'})\big] \le \sum_{t'=t}^{\infty} \gamma^{t'-t} \epsilon = \frac{\epsilon}{1-\gamma}. (17)
Theorem. We introduced the main theorem of this work above; it shows that the training safety is related to the error rate ε on action and the error rate κ on intervention of the human expert. The proof is given as follows.
Proof. We firstly decompose the advantage by splitting the behavior policy:

\mathbb{E}_{a\sim\pi_b(\cdot|s)} A_{\pi_h}(s, a) = \int_{a\in A} \pi_b(a|s)\, A_{\pi_h}(s, a)\, da
= \int_{a\in A} \big[\pi_n(a|s)(1 - I(s, a))\, A_{\pi_h}(s, a) + \pi_h(a|s)\, G(s)\, A_{\pi_h}(s, a)\big]\, da
= \int_{a\in A_h(s)} \pi_n(a|s)\, A_{\pi_h}(s, a)\, da + G(s)\, \mathbb{E}_{a\sim\pi_h}\big[A_{\pi_h}(s, a)\big]. (18)
The second term is equal to zero according to the definition of the advantage. We only need to compute the first term. Expanding the advantage into its detailed form, we have:

\mathbb{E}_{a\sim\pi_b(\cdot|s)} A_{\pi_h}(s, a) = \int_{a\in A_h(s)} \pi_n(a|s)\, A_{\pi_h}(s, a)\, da
= \int_{a\in A_h(s)} \pi_n(a|s)\big[C_{gt}(s, a) + \gamma V_{\pi_h}(s') - V_{\pi_h}(s)\big]\, da
= \underbrace{\int_{a\in A_h(s)} \pi_n(a|s)\, C_{gt}(s, a)\, da}_{(a)}
+ \underbrace{\gamma \int_{a\in A_h(s)} \pi_n(a|s)\, V_{\pi_h}(s')\, da}_{(b)}
- \underbrace{\int_{a\in A_h(s)} \pi_n(a|s)\, V_{\pi_h}(s)\, da}_{(c)}. (19)
Following Assumption 2, the term (a) can be bounded as:

\int_{a\in A_h(s)} \pi_n(a|s)\, C_{gt}(s, a)\, da \le \int_{a\in A_h(s)} C_{gt}(s, a)\, da \le \kappa. (20)
Following Lemma 3, the term (b) can be bounded as:

\gamma \int_{a\in A_h(s)} \pi_n(a|s)\, V_{\pi_h}(s')\, da \le \gamma \int_{a\in A_h(s)} V_{\pi_h}(s')\, da \le \frac{\gamma\epsilon}{1-\gamma} \int_{a\in A_h(s)} da = \frac{\gamma\epsilon}{1-\gamma} K(s), (21)

wherein K(s) = \int_{a\in A_h(s)} da denotes the area of the human-preferable region in the action space. It is a function of the human expert and the state.
The term (c) is always non-negative, so after applying the minus sign it is always ≤ 0. Aggregating the upper bounds of the three terms, we have the bound on the advantage:

\mathbb{E}_{a\sim\pi_b} A_{\pi_h}(s, a) \le \kappa + \frac{\gamma\epsilon}{1-\gamma} K(s). (22)
Substituting Eq. 22 as well as Lemma 3 into the performance difference lemma (Lemma 2), we have:

V_{\pi_b} = V_{\pi_h} + \frac{1}{1-\gamma}\, \mathbb{E}_{s\sim P_{\pi_b}} \mathbb{E}_{a\sim\pi_b}\big[A_{\pi_h}(s, a)\big]
\le \frac{\epsilon}{1-\gamma} + \frac{1}{1-\gamma}\Big[\kappa + \frac{\gamma\epsilon}{1-\gamma}\max_s K(s)\Big]
= \frac{1}{1-\gamma}\Big[\epsilon + \kappa + \frac{\gamma\epsilon}{1-\gamma}K'\Big], (23)
wherein K' = \max_s K(s) = \max_s \int_{a\in A_h(s)} da \ge 0 is correlated to the tolerance of the expert: if the human expert has a higher tolerance, then K' is greater.
Now we have proved the upper bound of the discounted probability of failure for the behavior policy in our method.
B VISUALIZATION OF LEARNED PROXY VALUE FUNCTION
To understand how well the proxy value function learns, we visualize 4 common scenarios in 4 pairs of figures as shown above. The left sub-figure of each pair shows a top-down view of a driving scenario, where a sequence of snapshots of the controlled vehicle is plotted, showing its trajectory. The right sub-figure of each pair overlays the heatmap of proxy values on the top-down image. We manually position the vehicle at different locations on the map, query the policy to get an action, and run the proxy Q function to get the value Q(s, a). A red region indicates that the proxy value is low if the agent is located there, and vice versa.
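The visualization procedure just described can be sketched as follows; `make_state`, `policy`, and `proxy_q` are hypothetical callables standing in for the simulator reset, the learned policy, and the learned proxy Q network, not the actual implementation:

```python
import numpy as np

def proxy_value_heatmap(grid_positions, make_state, policy, proxy_q):
    """Evaluate Q(s, a) over manually chosen vehicle positions (sketch)."""
    heat = np.zeros(len(grid_positions))
    for i, pos in enumerate(grid_positions):
        s = make_state(pos)      # manually place the vehicle at this position
        a = policy(s)            # query the policy for its action
        heat[i] = proxy_q(s, a)  # low values are rendered in red
    return heat
```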
In Fig. 4(a), the agent performs a lane change to avoid a potential collision with a traffic vehicle that is merging into the middle lane. The region near the traffic vehicle has extremely low values, and thus the agent has a low probability of entering this area.
In Fig. 4(b), traffic cones spread in the left lane. The agent learns to avoid crashes and the proxy value heatmap shows a large region of low values.
As shown by the trajectory in Fig. 4(c), though the agent could bypass the traffic vehicle on either the left-hand or the right-hand side, it chooses the right-hand side. The heatmap shows that a much higher proxy Q value is produced on the right bypassing path compared to the left path. This behavior resembles the preference of humans, who favor a right-hand detour.
In addition, in some areas where the path boundary is ambiguous, such as the intersection, the agent manages to learn a virtual boundary in the proxy Q space for efficiently passing these areas, as shown in Fig. 4(d).
The proxy Q value distribution shown in this section not only explains the avoidance behaviors, but also serves as a good indicator for the learned human preference.
C DETAILS OF HUMAN-IN-THE-LOOP BASELINES
We benchmark the performance of two human-in-the-loop methods, HG-DAgger (Kelly et al., 2019) and IWR (Mandlekar et al., 2020). Both methods require warming up through behavior cloning on a pre-collected dataset. In practice, we find that using 10K or 20K steps of human-collected data is not enough to initialize the policy with basic driving skills. Therefore, we use the pre-collected human dataset containing 30K transitions to warm up the policies. After warming up, HG-DAgger and IWR aggregate human intervention data into the training buffer and conduct behavior cloning again to update the policy, for 4 epochs. In each epoch the human-AI system collects 5000 transitions. The above figure shows the learning curves of IWR and HG-DAgger. As discussed in the main body of the paper, we credit the success of IWR to its re-weighting of human intervention data, which is not emphasized in HG-DAgger; a sketch of this idea follows.
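The following is a rough, illustrative sketch of the re-weighting idea (our own hypothetical helper, not IWR's actual implementation): the rare human-intervention transitions are upweighted so that each behavior-cloning batch draws a fixed share of its samples from them.

```python
import numpy as np

def iwr_sampling_weights(is_intervention, intervention_share=0.5):
    """Per-sample probabilities that upweight rare intervention data (sketch)."""
    mask = np.asarray(is_intervention, dtype=bool)
    n_int, n_free = max(mask.sum(), 1), max((~mask).sum(), 1)
    w = np.where(mask, intervention_share / n_int, (1 - intervention_share) / n_free)
    return w / w.sum()  # probabilities for weighted batch sampling
```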
D MORE ZOOM-IN PLOT OF THE LEARNING CURVES
The above figures present the zoomed-in learning curves of the RL baselines and HACO, showing HACO's superior sample efficiency.
E HYPER-PARAMETERS
Table 4: HACO
Table 5: PPO/PPO-Lag
Hyper-parameter Value
Discounted Factor γ 0.99
τ for target network update 0.005
Learning Rate 0.0001
Environmental horizon T 1500
Steps before Learning start 10000
Cost Limit for SAC-Lag 1
BC iterations for CQL 200000
CQL Loss Temperature β 5
Min Q Weight Multiplier 0.2
Table 7: BC
Hyper-parameter Value
Dataset Size 36,000
SGD Batch Size 32
SGD Epoch 200000
Learning Rate 0.0001
Table 8: CPO
Table 10: HG-DAgger
Hyper-parameter Value
Initializing dataset size 30K
Number of data aggregation epoch 4
Interactions per round 5000
SGD batch size 256
Learning rate 0.0004
Table 11: IWR
Hyper-parameter Value
Initializing dataset size 30K
Number of data aggregation epoch 4
Interactions per round 5000
SGD batch size 256
Learning rate 0.0004
Re-weight data distribution True | 1. What are the main contributions and strengths of the proposed algorithm for data-efficient human-in-the-loop learning?
2. How effective is the HACO learned policy in reducing the number of training timesteps required to reach basic competencies compared to the baseline methods?
3. Can you provide more details on who the experts are and their role in the training process?
4. Could you clarify the limitation regarding the implicit assumptions made by the method, specifically in using CQL as in equation (3)?
5. How might the method be extended to other environments with discrete action spaces?
6. Are there any plans to explore the use of this method in other environments beyond the driving simulator?
7. Would it be possible to include zoomed-in training curves in the appendix for better visualization?
8. What is the significance of the cosine similarity metric introduced in Section 3.3, and how does it relate to the multiple metrics of performance considered?
9. Could you elaborate on why IL is being outperformed with much less data, and what intuition can be gained from the results?
10. Are there any potential issues with the naming conventions used for different costs and rewards, and how might they be disambiguated or renamed for clarity? | Summary Of The Paper
Review | Summary Of The Paper
In this work, the authors propose a new algorithm for data-efficient human-in-the-loop learning, Human-AI Copilot Optimization. The main idea is to have experts intervene during training in cases in which unsafe situations arise. The HACO learned policy utilizes a multi-task objective: doing well relative to a learned value function (based on human interventions), keeping an exploratory policy, and keeping human interventions at a minimum. Experimentally, HACO seems to be able to drastically reduce the amount of number of environment training timesteps required to reach basic competencies to the agent in the test environment, while maintaining good task and safety performance.
Review
Quality
The paper's results are quite interesting, and the experiments – as far as I can tell – seem well executed. I particularly appreciated the additional ablations provided in Table 2. The method seems relatively well motivated. The method proposed seems to do very well relative to the chosen baselines along the performance metrics examined.
Clarity & Limitations
Stylistically the paper could use more editing. There are various typos (some below) and many phrasings that could be improved.
There are also various things that could be improved in the results' clarity, or which seem to constitute limitations:
Who is intervening? 3 experts – how were they selected? While the training is happening, what are the experts doing? "The whole training takes about 50 minutes." Is the expert providing actions for the whole time?
"The main experiments of HACO repeat 3 times." -> what does this mean? With each human expert, you perform one training run?
Generally would be helpful to give more intuition as to why IL is being outperformed with much less data?
"We split the driving scenes into the training set and test set with 50 different scenes in each set. At the beginning of each episode, a scene in the training or test set is randomly selected." -> The latter part of this phrase seems to suggest that you're training on the test set?
I found it confusing how in Section 3.3, different costs and rewards were defined one by one, without a high-level motivation. Before describing each one, I would state clearly that you are considering various different metrics of performance: "Test Return", "Test Cost" (which is different than just negative returns!), and "Test Success Rate". I would potentially propose changing the name from "Cost" to "Safety Violations", and from "Success Rate" to "Goal completion" to make it easier to disambiguate what the motivation between using these multiple metrics is. As currently phrased, they all seem synonymous with "reward" (or the opposite of it) on a first read.
In Section 3.3, I would switch to introduce cosine similarity first, and then say that in ideal case this is not necessary. I found presenting things in the current order more confusing.
The sentence "Frequent querying might exhaust the human expert and brings tremendous cognitive costs (Zhang et al., 2021)" is almost verbatim repeated both in section 3 and in section 2. I would remove it from 2.
"Under the protection from the human expert, HACO yields only 30.14 total training cost in the whole training process, an order of magnitude better than the other baselines," -> if I'm reading Table 1 correctly, it's essentially 2 orders of magnitude better? The next best value seems to be 1840
Figure 3: it might be useful to have zoomed in (on the x axis) training curves in the appendix. Are the shadings of the training curves standard errors?
A limitation seems to be that of implicit assumptions made by the method: using CQL as in eq. (3) is making relatively strong assumptions about the nature of expert intervention. While it seems to work well in practice, the form of eq. (6) also seems quite arbitrary, and it's unclear how one would extend it to other environments (e.g. with discrete action spaces).
Additionally, the lack of human-in-the-loop baselines seems potentially problematic – after all, the method is presented as a improvement in that line of work. The same could be said about the environment – given how promising this method seems to be in this context, it would be very interesting to see how it performs in other environments. While I realize that this is not something that can be addressed in the timeframe of rebuttals, it would significantly strengthen the paper.
Typos:
"To encourage the exploration in the area that does not violating human intention,"
"Human-Gated DAgger (HG-DAgger) (Kelly et al., 2019) utilizes an expert to intervene exploration and"
"Other forms of human participation includes providing human preferences"
"rewarding the state-action pairs deduced by human and penalizing those by agent."
"The speed reward is vehicle’s current speed"
"Since we test different approaches in the driving simulator, no injury would happens."
Originality
Although I'm not an expert in this area, as far as I know, this seems like a novel contribution. |
ICLR | Title
Efficient Learning of Safe Driving Policy via Human-AI Copilot Optimization
Abstract
Human intervention is an effective way to inject human knowledge into the loop of reinforcement learning, bringing fast learning and training safety. But given the very limited budget of human intervention, it is challenging to design when and how the human expert interacts with the learning agent during training. In this work, we develop a novel human-in-the-loop learning method called Human-AI Copilot Optimization (HACO). To allow the agent sufficient exploration in risky environments while ensuring training safety, the human expert can take over control and demonstrate to the agent how to avoid potentially dangerous situations or trivial behaviors. The proposed HACO then effectively utilizes the data collected both from trial-and-error exploration and from the human's partial demonstration to train a high-performing agent. HACO extracts proxy state-action values from the partial human demonstration and optimizes the agent to improve the proxy values while reducing the human interventions. No environmental reward is required in HACO. The experiments show that HACO achieves substantially high sample efficiency in the safe driving benchmark. It can train agents to drive in unseen traffic scenes with a small budget of human intervention and achieve high safety and generalizability, outperforming both reinforcement learning and imitation learning baselines by a large margin. Code and demo videos are available at: https://decisionforce.github.io/HACO/.
1 INTRODUCTION
How to effectively inject human knowledge into the learning process is one of the key challenges in training reliable autonomous agents for safety-critical applications. In reinforcement learning (RL), researchers can inject their intentions into a carefully designed reward function. The learning agent freely explores the environment to collect data and develops the desired behaviors induced by the reward function. However, RL methods bear two drawbacks that limit their applications in safety-critical tasks: First, the nature of trial-and-error exploration exposes the RL agent to dangerous situations (Saunders et al., 2017). Second, it is difficult to summarize all the intended behaviors to be learned into the reward function. Taking the driving vehicle as an example, an ideal policy should acquire a set of skills, such as overtaking, yielding, emergency stopping, and negotiation with other vehicles. It is intractable to manually design a reward function that leads to the emergence of all those behaviors in the trained agent. To mitigate these two challenges, practitioners enforce human intentions through imitation learning (IL), where the agent is trained to imitate expert-generated state and action sequences. During the demonstration, the immature agent does not interact with the risky environment and thus the training safety is ensured. High-quality expert demonstrations directly provide optimal solutions for the agent to imitate. However, the IL paradigm suffers from the distributional shift problem (Ross & Bagnell, 2010; Ross et al., 2011), while the induced skills are not sufficiently robust with respect to changes in the control task (Camacho & Michie, 1995).
Different from vanilla RL or IL, human-in-the-loop learning is an alternative paradigm to inject human knowledge, where a human subject accompanies the agent and oversees its learning process. Previous works require the human to either passively advise which action is good (Mandel et al., 2017) or evaluate the collected trajectories (Christiano et al., 2017; Guan et al., 2021; Reddy et al.,
∗Quanyi Li and Zhenghao Peng contribute equally to this work.
2018; Warnell et al., 2018; Christiano et al., 2017; Sadigh et al., 2017; Palan et al., 2019). This kind of passive human involvement exposes the human-AI system to risks, since the agent explores the environment without protection. Some other works require the human to merely intervene in the exploration by terminating the episode (Saunders et al., 2018; Zhang & Cho, 2016), but it is not practical to terminate and reset the environment instantly in the real world (Xu et al., 2020). Intervening and taking over the control from the learning agent is a natural approach to safeguard the human-AI system (Kelly et al., 2019; Spencer et al., 2020). However, a challenge exhibited in previous works is the budget of human intervention. Since human cognitive resources are precious and limited, it is essential to carefully design when and how the human expert is involved in the learning process so that the human knowledge can be injected effectively.
In this work, we propose an efficient human-in-the-loop learning method called Human-AI Copilot Optimization (HACO). The key feature of HACO is that it can learn to minimize the human intervention and adjust the level of automation of the learning agent adaptively during training. As shown in Figure 1 A, HACO allows the human expert to take over the human-AI system in a proactive manner. If the human decides to intervene in the action of the agent, he/she should demonstrate to the learning agent the correct actions to overcome the current undesired situation. The human intervention and the partial demonstration are two sources of informative training data. We use an offline RL technique to maintain a proxy value function of the human-AI mixed behavior policy, even though the agent does not have access to the environmental reward during training. To encourage exploration in the state-action space permitted by the human, we also maximize the entropy of the agent's action distribution when the agent is not taken over.
Experiments in the virtual driving environments MetaDrive (Li et al., 2021) and CARLA (Dosovitskiy et al., 2017) show that, with an economical human budget, HACO outperforms RL and IL baselines by a substantial margin in terms of sample efficiency, performance, safety, and generalizability in unseen testing environments. Thus human-AI copilot optimization is an efficient learning paradigm for injecting human knowledge in an online setting.
2 RELATED WORK
Learning from Demonstration. Passive imitation learning such as behavior cloning (Widrow, 1964; Osa et al., 2018; Huang et al., 2020; Sun et al., 2020) and recently proposed offline RL methods (Kumar et al., 2020; Fujimoto et al., 2018; Wu et al., 2019) train agents from an off-the-shelf dataset and guarantee the training safety, since no interaction with the environment is needed. Inverse RL methods (Ng et al., 2000; Abbeel & Ng, 2004; Fu et al., 2017; Bloem & Bambos, 2014) learn a reward function from the human demonstration and then use it to incentivize the agents to master the intended behaviors. Proposed more recently, GAIL (Ho & Ermon, 2016) and its variants (Song et al., 2018; Sasaki et al., 2018; Kostrikov et al., 2018) and SQIL (Reddy et al., 2019) compare the trajectory similarity between agents and humans and thus require the agent to interact with the environment. Similar to RL methods, this paradigm exposes the agent to potentially dangerous situations.
Human-in-the-loop Learning Methods. Many works focus on incorporating humans in the training loop of conventional RL or IL paradigms. DAgger (Ross et al., 2011) and its extended methods (Kelly et al., 2019; Zhang & Cho, 2016; Hoque et al., 2021) correct the compounding error (Ross & Bagnell, 2010) of behavior cloning by periodically requesting the expert to provide more demonstrations. Instead of providing demonstrations upon request, Human-Gated DAgger (HG-DAgger) (Kelly et al., 2019), Expert Intervention Learning (EIL) (Spencer et al., 2020) and Intervention Weighted Regression (IWR) (Mandlekar et al., 2020) empower the expert to intervene in the exploration and carry the agent to safe states. However, these methods do not impose constraints to reduce human intervention and do not utilize the data from the free exploration of the agent. Human subjects can also be involved in the loop by providing preferences based on evaluative feedback on two behavior sequences generated by the agent (Christiano et al., 2017; Sadigh et al., 2017; Palan et al., 2019; Ibarz et al., 2018; Cui & Niekum, 2018).
Human-AI copilot or shared autonomy is a more intimate form of the human-in-the-loop methods. The AI agent and the human work together simultaneously to achieve a common goal. By giving human guidance and feedback instantly at run-time, the explorable state and action spaces can be greatly narrowed down (Saunders et al., 2018). The learning goal can further match the task objective by providing extra human feedback combined with the reward function (Reddy et al., 2018; Warnell et al., 2018; Wu et al., 2021; Cederborg et al., 2015; Arumugam et al., 2019). Human-AI copilot is helpful and practical when applying RL to real-world tasks where safety constraints must be satisfied (Garcıa & Fernández, 2015; Amodei et al., 2016; Bharadhwaj et al., 2020; Alshiekh et al., 2018). In our previous work (Peng et al., 2021), we made an attempt to develop a method called Expert-Guided Policy Optimization (EGPO), where a PPO expert policy is involved to monitor the learning agent. The differences can be summarized as twofold: (1) We substitute the expert with a human and design a special mechanism to mitigate the delayed feedback error; (2) Based on the comprehensive ablation study and prototyping, we remove redundant designs like the takeover function and the need for a reward function, making the proposed method simple yet effective.
Reducing human burden is a major challenge in human-in-the-loop methods. A feasible solution is to learn an intervention function that imitates human intervention signals and stops the catastrophic actions of agents (Kelly et al., 2019; Zhang & Cho, 2016; Saunders et al., 2017; Abel et al., 2017), which can relieve the mental stress of the human subject during training. In this work, we devise our learning scheme explicitly to include the human cognitive cost as one of the objectives to minimize.
3 HUMAN-AI COPILOT OPTIMIZATION
In this section, we introduce Human-AI Copilot Optimization (HACO), an efficient learning algorithm that trains agents from human interventions, partial demonstrations and free exploration. For human-in-the-loop learning, it is essential to design when and how to engage human subjects. The major issue is the cognitive cost of the human subject (Zhang et al., 2021). Frequent querying might bring tremendous cognitive cost and exhaust the human expert, causing incorrect or delayed feedback that hinders the training. Thus the proposed pipeline aims to minimize the human intervention cost during the training, which reduces the reliance on the expert’s demonstration over time and improves the learning agent’s autonomy. The overall workflow of HACO is presented in Algorithm 1.
3.1 HUMAN-AI COPILOT TRAINING PARADIGM
We aim to learn an autonomous agent with policy πn(an|s) that can take an informed action an in state s. As shown in Fig. 1, we frame the human-AI copilot paradigm that extends the standard reinforcement learning diagram by incorporating a human expert. At each step, the human expert oversees the current state and decides whether to intervene. If necessary, he/she will execute the human action ah to overwrite the agent's action an. We denote the human intervention by a Boolean indicator I(s, an), and thus the action applied to the environment is called the safe action â = I(s, an)ah + (1 − I(s, an))an. Denoting the human policy as πh, the actual trajectories that occur during training are derived from a shared behavior policy πb:
\pi_b(a|s) = \pi_n(a|s)(1 - I(s, a)) + \pi_h(a|s)\, G(s), (1)

wherein G(s) = \int_{a'\in A} I(s, a')\, \pi_n(a'|s)\, da' is the probability of the agent choosing an action that will be rejected by the human.
We call the transition sequences during takeover, {(st, an,t, ah,t, I(st, an,t), st+1), ...}, the partial demonstration. The partial demonstration and the free exploration transitions are recorded in the replay buffer B and fed to the training pipeline. Note that we do not need to store environmental reward or cost in the buffer, since the proposed method does not use them.
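To make the data flow concrete, the following minimal sketch shows one step of the copilot rollout implementing the action mixing of Eq. 1 and the buffer recording described above. The names `env`, `human`, and `buffer` are illustrative stand-ins, not the actual API of our implementation.

```python
def copilot_step(env, agent_policy, human, buffer, s_t):
    """One environment step of the human-AI copilot rollout (sketch)."""
    a_n = agent_policy(s_t)                        # agent proposes an action
    takeover = human.wants_to_intervene(s_t, a_n)  # Boolean indicator I(s_t, a_n)
    a_h = human.action(s_t) if takeover else None  # partial demonstration
    a_hat = a_h if takeover else a_n               # safe action applied to the env
    s_next, done = env.step(a_hat)                 # note: no reward or cost is stored
    buffer.append((s_t, a_n, a_h, takeover, s_next))
    return s_next, done
```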
In the human-AI copilot training, the human is obligated to guide the agent learning and safeguard the learning process by proactively taking over the control if necessary. This paradigm rules out the dispensable states and mitigates the safety concern in free exploration of RL and active imitation learning methods (Ross et al., 2011). Different from previous offline RL works training from fixed dataset (Bojarski et al., 2016; Ho & Ermon, 2016; Reddy et al., 2019; Kumar et al., 2020; Fujimoto et al., 2018; Wu et al., 2019) where no closed loop feedback is accessible, the human-AI copilot training produces partial demonstrations that contains the necessary human knowledge to overcome dangerous situations into the learning. The copilot nature alleviates the distributional shift problem, since the human intervenes when the agent performs suspicious behaviors, so that there is a continuity of the state visitation between the agent and the expert.
In the next section, we will introduce how we instantiate the human-AI copilot paradigm with a human-efficient algorithm that can effectively optimize the agent toward a safe and high-performing policy.
3.2 LEARNING OBJECTIVES
We form three objectives that fully utilize the human data: (1) the agent should maximize a proxy value function Q(s, a), which reflects human intentions on how to finish the task; (2) the agent should explore thoroughly to visit the state-action subspace permitted by the human; concretely, we maximize the action distribution entropy H(π(·|s)); (3) the agent should maximize the level of automation and reduce human intervention. Episodic human intervention is estimated by an intervention value function Q^I(s, a) based on the step-wise intervention cost C(s, a). Thus the overall learning objective of HACO becomes:

\max_\pi \; \mathbb{E}\big[Q(s, a) + \mathcal{H}(\pi) - Q^I(s, a)\big]. (2)
We then discuss the practical implementation of the aforementioned design goals.
Proxy value function. HACO follows a reward-free setting, so we cannot estimate the expected state-action value based on a ground-truth reward function defined by the environment. We instead estimate a proxy value function Q(s, a; φ) (φ denotes the model parameters) that captures the ordinal preference of human experts, which implicitly reflects human intentions. We utilize conservative Q-learning (Kumar et al., 2020) and form the optimization problem of the proxy value function as:

\min_\phi \; \mathbb{E}_{(s, a_n, a_h, I(s, a_n)) \sim \mathcal{B}} \big[I(s, a_n)\big(Q(s, a_n; \phi) - Q(s, a_h; \phi)\big)\big]. (3)
The above optimization objective can be interpreted as being optimistic toward the human's action ah and pessimistic toward the agent's action an. The proxy value function learns to represent the high-value state-action subspace preferred by the human expert.
Entropy regularization. If the learning agent visits the human-preferable subspace insufficiently during free exploration, the states evoking high proxy values are rarely encountered, making the back-propagation of the proxy value to preceding states difficult and thus damaging the learning. To encourage exploration, we adopt the entropy regularization technique of (Haarnoja et al., 2018) and form an auxiliary signal to update the proxy value function apart from Eq. 3:

\min_\phi \; \mathbb{E}_{(s_t, \hat{a}_t, s_{t+1}) \sim \mathcal{B}} \big[(y - Q(s_t, \hat{a}_t; \phi))^2\big], \quad y = \gamma\, \mathbb{E}_{a' \sim \pi_n(\cdot|s_{t+1})} \big[Q(s_{t+1}, a'; \phi') - \alpha \log \pi_n(a'|s_{t+1})\big], (4)

wherein \hat{a}_t is the executed action at state s_t, \phi' denotes the delayed-update parameters of the target network, and \gamma is the discount factor. Since the environment reward is not accessible to HACO, we remove the reward term in the update target y. Combining Eq. 3 and Eq. 4, the formal optimization objective of the proxy value function becomes:

\min_\phi \; \mathbb{E}_{\mathcal{B}} \big[(y - Q(s_t, \hat{a}_t; \phi))^2 + I(s_t, a_{n,t})\big(Q(s_t, a_{n,t}; \phi) - Q(s_t, a_{h,t}; \phi)\big)\big]. (5)
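A minimal PyTorch-style sketch of the proxy value update of Eq. 5 follows; the batch field names and network interfaces are assumptions for illustration, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def proxy_q_loss(q_net, q_target, policy, batch, gamma=0.99, alpha=0.2):
    """Eq. 5 (sketch): reward-free SAC-style TD term plus the CQL-style
    term applied only on samples where the human took over."""
    with torch.no_grad():
        a_next, logp_next = policy.sample(batch["s_next"])
        # Reward-free target of Eq. 4: no environment reward term.
        y = gamma * (q_target(batch["s_next"], a_next) - alpha * logp_next)
    td_loss = F.mse_loss(q_net(batch["s"], batch["a_exec"]), y)
    # Pessimistic on the agent's action, optimistic on the human's action.
    takeover = batch["takeover"].float()
    cql_term = (takeover * (q_net(batch["s"], batch["a_agent"])
                            - q_net(batch["s"], batch["a_human"]))).mean()
    return td_loss + cql_term
```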
Algorithm 1: The workflow of HACO during training
1: Initialize an empty replay buffer B
2: while Training is not finished do
3:   while Episode is not terminated do
4:     an,t ∼ πn(·|st)  # Retrieve agent's action
5:     I(st, an,t) ← Human expert decides whether to intervene by observing current state st
6:     if I(st, an,t) is True then
7:       ah,t ← πh(·|st)  # Retrieve human's action
8:       Apply ah,t to the environment
9:     else
10:      Apply an,t to the environment
11:    if I(st, an,t) is True and I(st−1, an,t−1) is False then
12:      C(st, an,t) ← Compute intervention cost following Eq. 6
13:    else
14:      C(st, an,t) ← 0  # Set intervention cost to zero
15:    Record st, an,t, I(st, an,t) and ah,t (if intervened) to the buffer B
16:  Update proxy value Q, intervention value Q^I and policy π according to Eq. 5, Eq. 7 and Eq. 8, respectively
Reducing human interventions. Directly optimizing the agent policy according to the proxy value function will lead to failure when evaluating the agent without human participation. This is because Q(s, a) represents the proxy value of the mixed behavior policy πb instead of the learning agent’s πn due to the existence of human intervention. It is possible that the agent learns to deliberately abuse human intervention by always taking actions that violate human intentions, such as driving off the road when near the boundary, which forces human to take over and provide demonstrations. In this case, the level of automation for the agent is low and the human subject exhausts to provide demonstrations. Ablation study result in Table 2(c) illustrates this phenomenon.
To economically utilize the human budget and reduce human interventions over time, we punish the agent action that triggers human intervention in a mild manner, by using the cosine similarity between the agent's action and the human's action as the intervention cost function in the form below:

C(s, a_n) = 1 - \frac{a_n^T a_h}{\|a_n\|\,\|a_h\|}, \quad a_h \sim \pi_h(\cdot|s). (6)
The agent will receive a large penalty only when its action is significantly different from the expert action in terms of cosine similarity.

A straightforward form of C is a constant +1 whenever the human expert issues an intervention. However, we find that there usually exists a temporal mismatch between the human intervention and the faulty action, so that the intervention cost is given to the agent at a delayed time step t + ε. It is possible that the agent's action a_{n,t+ε} is a correct action that saves the agent from danger but is mistakenly marked as a faulty action that triggers human intervention. In the ablation study, we find that using the constant cost yields inferior performance compared to the cosine similarity.
As shown in Line 11-14 of Algorithm 1, we only yield non-zero intervention cost at the first step of human intervention. This is because the human intervention triggered by the exact action an,t indicates this action violates the underlying intention of human at this moment. Minimizing the chance of those actions will increase the level of automation.
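The cost computation of Eq. 6, combined with the first-step rule of Algorithm 1 (lines 11-14), can be sketched as follows; the small constant in the denominator is our addition for numerical stability.

```python
import numpy as np

def intervention_cost(a_agent, a_human, takeover, prev_takeover):
    """Non-zero only at the first step of a human takeover (sketch)."""
    if takeover and not prev_takeover:
        cos_sim = np.dot(a_agent, a_human) / (
            np.linalg.norm(a_agent) * np.linalg.norm(a_human) + 1e-8)
        return 1.0 - cos_sim  # lies in [0, 2]; small when the actions nearly agree
    return 0.0
```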
To improve the level of automation, we form an additional intervention value function Q^I(s, a) as the expected cumulative intervention cost, similar to estimating the state-action value in Q-learning through the Bellman equation:

Q^I(s_t, a_{n,t}) = C(s_t, a_{n,t}) + \gamma\, \mathbb{E}_{s_{t+1}\sim\mathcal{B},\, a_{t+1}\sim\pi_n(\cdot|s_{t+1})} \big[Q^I(s_{t+1}, a_{t+1})\big]. (7)
This value function is used to directly optimize the policy.
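A sketch of the TD update implied by Eq. 7 is given below, with illustrative batch fields (`c` is the stored intervention cost); terminal-state handling is omitted and the interfaces are assumptions.

```python
import torch
import torch.nn.functional as F

def intervention_value_loss(qi_net, qi_target, policy, batch, gamma=0.99):
    """Bellman backup for the intervention value Q^I of Eq. 7 (sketch)."""
    with torch.no_grad():
        a_next, _ = policy.sample(batch["s_next"])  # a_{t+1} ~ pi_n
        y = batch["c"] + gamma * qi_target(batch["s_next"], a_next)
    return F.mse_loss(qi_net(batch["s"], batch["a_agent"]), y)
```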
Learning policy. Using the entropy-regularized proxy value function Q(s, a) as well as the intervention value function Q^I(s, a), we form the policy improvement objective as:

\max_\theta \; \mathbb{E}_{s_t\sim\mathcal{B}} \big[Q(s_t, a_n) - \alpha \log \pi_n(a_n|s_t; \theta) - Q^I(s_t, a_n)\big], \quad a_n \sim \pi_n(\cdot|s_t; \theta). (8)
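The policy update of Eq. 8 can then be sketched as minimizing the negated objective; `rsample` denotes a hypothetical reparameterized sampling method so that gradients flow through the action.

```python
import torch

def policy_loss(policy, q_net, qi_net, batch, alpha=0.2):
    """Eq. 8 (sketch): raise the entropy-regularized proxy value while
    lowering the expected cumulative intervention cost."""
    a, logp = policy.rsample(batch["s"])
    objective = q_net(batch["s"], a) - alpha * logp - qi_net(batch["s"], a)
    return -objective.mean()  # minimize the negative of the objective
```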
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
Task. We focus on the driving task in this work. This is because driving is an important decision-making problem with a huge social impact, where safety and training efficiency are critical. Since much research on autonomous driving employs humans in real vehicles (Bojarski et al., 2016; Kelly et al., 2019), human safety and human cognitive cost become practical challenges that limit the application of learning-based methods in industry. Therefore, the driving task is an ideal benchmark for the human-AI copilot paradigm.
Simulator. Considering the potential risks of employing human subjects in physical experiments, we benchmark different approaches in the driving simulator. We employ a lightweight driving simulator MetaDrive (Li et al., 2021), which preserves the capacity to evaluate the safety and generalizability in unseen environments. The simulator is implemented based on Panda3D (Goslin & Mine, 2004) and Bullet Engine that has high efficiency as well as accurate physics-based 3D kinetics. MetaDrive uses procedural generation to synthesize an unlimited number of driving maps for the split of training and test sets, which is useful to benchmark the generalization capability of different approaches in the context of safe driving. Some generated driving scenes are presented in Fig. 2. The simulator is also extremely efficient and flexible so that we can run the human-AI copilot experiment in real-time. Though we mainly describe the setting of MetaDrive in this section, we also experiment on CARLA (Dosovitskiy et al., 2017) simulator in Sec. 4.3.
Training Environment. In the simulator, the task for the agent is to steer the target vehicle with low-level control signal, namely acceleration, brake and steering, to reach the predefined destination and receive a success flag. The ratio of episodes where the agent successfully reaches the destination is called the success rate. To increase the difficulty of the task, we scatter obstacles randomly in each driving scene such as movable traffic vehicles, fixed traffic cones, and warning triangles.
The observation contains (1) the current states such as the steering, heading, velocity and relative distance to boundaries etc., (2) the navigation information that guides the vehicle toward the destination, and (3) the surrounding information encoded by a vector of 240 Lidar-like distance measures of the nearby vehicles.
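For concreteness, the observation can be thought of as a flat vector concatenating the three parts; the dimensions below are illustrative assumptions rather than MetaDrive's exact interface.

```python
import numpy as np

ego_state = np.zeros(9)     # steering, heading, velocity, distances to boundaries, ...
navigation = np.zeros(10)   # checkpoints guiding the vehicle toward the destination
lidar = np.zeros(240)       # 240 Lidar-like distance measures of surrounding objects
observation = np.concatenate([ego_state, navigation, lidar])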
Though HACO does not receive environmental reward during training, we provide a reward function to train the baseline methods and to evaluate HACO at test time. The reward function contains a dense driving reward, a speed reward and a sparse terminal reward. The driving reward measures the longitudinal movement toward the destination. We also reward the agent according to its velocity and give a sparse reward of +20 when the agent arrives at the destination.
Each collision with a traffic vehicle or obstacle yields +1 environmental cost. Note that HACO cannot access this cost during training. This cost is used to train the safe RL baselines as well as to test the safety of trained policies. We term the episodic cost the safety violation, which is our measurement of the safety of a policy.
We invite the human expert to supervise the real-time exploration of the learning agent with hands on the steering wheel, as shown in Fig. 1B. When a dangerous situation is about to happen, the human takes over the vehicle by pressing the paddle beside the wheel and starts controlling the vehicle by steering the wheel and stepping on the pedals.
Split of training and test sets. Different from the conventional RL setting where the agent is trained and tested in the same fixed environment, we focus on evaluating the generalization performance through testing the trained agents in separated test environments. We split the driving scenes into the training set and test set with 50 different scenes in each set. After each training iteration, we roll out the learning agent without guardian in the test environments and record success rate and safety violation given by the environment and present it in Table 1.
* During HACO training, in 8316 ± 497.90 steps out of the total 30K steps the human expert intervenes and overwrites the agent’s actions. The whole training takes about 50 minutes.
Implementation details. We conduct experiments in the driving simulator and implement algorithms using RLLib (Liang et al., 2018), an efficient distributed learning system. When training the baselines, we host 8 concurrent trials on an Nvidia GeForce RTX 2080 Ti GPU. Each trial consumes 2 CPUs with 8 parallel rollout workers. Except for the human-in-the-loop experiments, all baseline experiments are repeated 5 times with different random seeds. The main experiments of HACO are conducted on a local computer with an Nvidia GeForce RTX 2070 and repeated 3 times. The ablations and the baseline human-in-the-loop experiments are run once due to the limited human budget. One human subject participates in each experiment. In all tables and figures, we provide the standard deviation if the experiments are repeated over multiple runs with different random seeds. Information about other hyper-parameters is given in the Appendix.
4.2 BASELINE COMPARISON
We compare our method to vanilla RL and safe RL methods, which inject the human intention and constraint through pre-defined reward and cost functions. We test the vanilla RL methods PPO (Schulman et al., 2017) and SAC (Haarnoja et al., 2018), with the cost added to the reward as an auxiliary negative reward, called reward shaping (RS). Three common safe RL baselines, Constrained Policy Optimization (CPO) (Achiam et al., 2017), PPO-Lagrangian (Stooke et al., 2020) and SAC-Lagrangian (Ha et al., 2020), are evaluated.
Apart from the RL methods, we also collect a human demonstration dataset containing one hour of the expert's demonstrations, with about 36K transitions in the training environments. For these high-quality demonstrations, the success rate of the episodes reaches 98% and the safety violation is down to 0.16. Using this dataset, we evaluate the passive IL method Behavior Cloning, the active IL method GAIL (Ho & Ermon, 2016) and the offline RL method CQL (Kumar et al., 2020). We also run Human-Gated DAgger (HG-DAgger) (Kelly et al., 2019) and Intervention Weighted Regression (IWR) (Mandlekar et al., 2020) as human-in-the-loop baselines, based on this dataset and the human-AI copilot workflow.
Training-time Safety. The training-time safety is measured by the total training safety violation, the total number of critical failures occurring during training. Note that the environmental cost here is different from the human intervention cost in HACO. As illustrated in Table 1 and Fig. 3A, HACO achieves huge success in training-time safety. Apart from the empirical results, we provide a proof in the Appendix showing that the training safety can be bounded by the guardian. Under the protection of the human expert, HACO yields only 30.14 total safety violations in the whole training process, two orders of magnitude better than the other RL baselines, even though HACO does not access the environmental cost. IWR and HG-DAgger also achieve drastically lower training safety violations, showing the power of human-in-the-loop methods. The most competitive RL baseline, SAC-RS, which achieves a similar test success rate, causes on average 2767.77 training safety violations, which is much higher
than HACO. The active IL method GAIL also has significantly higher safety violations than HACO and its performance is unsatisfactory.
From the perspective of safety, we find that the reward shaping technique is inferior compared to the Lagrangian method, both for SAC and PPO variants. PPO causes more violations than SAC, probably due to the relatively lower sample efficiency and slower convergence speed.
Sample Efficiency and Human Cognitive Cost. The human-AI system is not only well protected by the human, but also achieves superior sample efficiency with limited data usage. As shown in Fig. 3A and Table 1, we find that HACO is an order of magnitude more sample efficient than the RL baselines. HACO achieves a 0.83 test success rate by interacting with the environment for merely 30K steps, wherein the human provides safe actions as demonstrations in only 8,316 steps on average. During nearly 50 minutes of human-AI copilot operation, the human provides demonstrations in only 27% of the steps.
The human-in-the-loop baselines IWR and HG-DAgger consume 50K steps of human budget, and only IWR achieves a satisfactory success rate. By prioritizing samples from human intervention, IWR manages to learn from human interventions the key actions for escaping dangerous situations caused by the compounding error. Without re-weighting the human takeover data, HG-DAgger fails to learn from the few but important human demonstrations. The learning curves of these two methods can be found in the Appendix.
Unlike HACO, all the learning-from-demonstration methods fail with the dataset containing 36K transitions. Compared to IL methods, which optimize agents to imitate exact actions at each time step, HACO considers the learning on a trajectory basis. We incentivize the agent to choose an action that can bring potential return in the future trajectory, instead of only mimicking the expert's behavior at each step. On the other hand, HACO gathers expert data in an online manner through the human-AI copilot, which better mitigates the distributional shift that is severe in offline RL methods.
Learning Dynamics. The intervention minimization mechanism in HACO reduces the human cognitive cost. As shown in Fig. 3B, the takeover rate gradually decreases over the course of learning. The curve of the episodic intervention cost suggests that the human intervention frequency becomes lower and the similarity between the agent's and the human's actions increases. We also provide a visualization of the learned proxy value function in the Appendix, showing that the learning scheme of HACO can effectively encode human preferences into the proxy values.
4.3 ABLATION STUDY
Takeover Policy Analysis. We request the human subjects to try two intervention strategies. The first is to take over at a low frequency and produce a long trajectory at each intervention; in this way the intervention signal is sparse. The other strategy is to intervene more frequently and provide fragmented demonstrations. In Table 2(a), the experiment shows that the proposed HACO works better with dense human intervention signals. The agent trained with long trajectories achieves an inferior success rate and episodic reward compared to the agent trained with dense intervention signals.
Cosine Similarity Cost Function. As shown in Table 2(b), we replace the intervention cost function of Eq. 6 with a constant value of +1 whenever human intervention happens. We find the agent learns to stay at the spawn point and does not move at all at test time. As discussed in Sec. 3.2, it is possible that the human intervenes at incorrect timing. This makes the agent fail to identify how to drive correctly. Using the negative cosine similarity to measure the divergence between the agent's and the human's actions alleviates this phenomenon, since the human intervention penalty is down-weighted when the agent provides an action that adheres to the human intention.
Intervention Minimization. As shown in Table 2(c), when removing the intervention minimization mechanism, the agent drives directly toward the boundary. This is because the agent learns to abuse the human expert to take over all the time, which increases proxy values but causes consistent out-of-the-road failures in testing. This result shows the importance of intervention minimization.
CARLA Experiment. To test the generality of HACO, we run HACO in the CARLA simulator (Dosovitskiy et al., 2017). We use the top-down semantic view provided by CARLA as the input and a 3-layer CNN as the feature extractor for HACO and the PPO baseline. For PPO, the reward follows the setting described in CARLA and is based on the velocity and the completion of the road. We train HACO (with a human expert) and PPO in CARLA town 1 and report the test performance in CARLA town 2. Table 3 shows that the proposed HACO can be successfully deployed in the CARLA simulator with visual observation and achieves comparable results. Also, it can train a driving agent with a new CNN feature extractor in 10 minutes, with only 8,000 samples from the environment. The video is available at: https://decisionforce.github.io/HACO/.
5 CONCLUSION
We develop an efficient human-in-the-loop learning method, Human-AI Copilot Optimization (HACO), which trains agents from the human interventions and partial demonstrations. The method incorporates the human expert in the interaction between agent and environment to ensure safe and efficient exploration. The experiments on safe driving show that the proposed method achieves superior training-time safety, outperforming RL and IL baselines. Besides, it shows a high sample efficiency for rapid learning. The constrained optimization technique is used to prevent the agent from excessively exploiting the human expert, which also decreases the takeover frequency and saves valuable human budget.
One limitation of this work is that the trained agents behave conservatively compared to the agents from RL baselines. Aiming to ensure the training-time safety of the copilot system, the human expert typically slows the vehicle down to rescue it from risky situations. This makes the agent tend to drive slowly and exhibit behaviors such as frequent yielding at intersections. In future work, we will explore the possibility of learning more sophisticated skills.
Acknowledgments This project was supported by the Centre for Perceptual and Interactive Intelligence (CPII) Ltd under InnoHK supported by the Innovation and Technology Commission.
ETHICS STATEMENT
The proposed Human-AI Copilot Optimization algorithm aims at developing a new human-friendly human-in-the-loop training framework. We successfully increase the level of automation after human-efficient training. We believe this work has a great positive social impact, as it advances the development of more intelligent AI systems that impose less burden on humans.
We employ human subjects to participate in the experiments. Human subjects can stop the experiment if any discomfort occurs. No human subjects were harmed in the experiments, since we test in the driving simulator. The human subjects earn an hourly salary above the average in our community. Each experiment lasts nearly one hour. Human participants rest at least three hours after each experiment. During training and data processing, no personal information is revealed in the collected dataset or the trained agents.
A MAIN THEOREM AND THE PROOF
In this section, we derive the upper bound of the discounted probability of failure of HACO, showing that we can bound the training safety with the guardian.

Theorem 1 (Upper bound of training risk). The expected cumulative probability of failure V_{π_b} of the behavior policy π_b in HACO is bounded by the error rate of the human expert action ε, the error rate of the human expert intervention κ and the tolerance of the human expert K':

V_{\pi_b} \le \frac{1}{1-\gamma}\Big[\epsilon + \kappa + \frac{\gamma\epsilon}{1-\gamma}K'\Big],

wherein K' = \max_s K(s) = \max_s \int_{a\in A_h(s)} da \ge 0 is called the human expert tolerance. The human expert tolerance K' becomes larger if the human relaxes the intervention and allows the agent to explore the environment more freely.
The proof is given as follows.
Notations. Before starting, we first recap and describe the notations. In HACO, a human subject copilots with the learning agent. The agent's policy is π_n and the human's policy is π_h. Both policies produce actions in the bounded action space A ⊆ R^{|A|}. The human expert decides whether to intervene based on the current state and the agent's action a_n. The human intervention is denoted by a Boolean function I(s, a). The mixed behavior policy π_b that produces the real actions applied to the environment is denoted as:

\pi_b(a|s) = \pi_n(a|s)(1 - I(s, a)) + \pi_h(a|s)\, G(s), (9)

wherein G(s) = \int_{a'\in A} I(s, a')\, \pi_n(a'|s)\, da' denotes the probability of the agent choosing an action that will be rejected by the human.
Therefore, at a given state, we can split the action space into two parts, according to whether intervention will happen if the agent samples an action in it. We denote the confident action space as:

A_h(s) = \{a : I(s, a) \text{ is False}\}. (10)

The confident action space contains the actions that will not be rejected by the human expert at state s.
We also define the ground-truth indicator C_{gt}, denoting whether the action will lead to an unsafe state. This unsafe state is determined by the environment and is not revealed to the learning algorithm:

C_{gt}(s, a) = \begin{cases} 1, & \text{if } s' = \mathcal{P}(s'|s, a) \text{ is an unsafe state}, \\ 0, & \text{otherwise}. \end{cases} (11)
Therefore, at a given state s, the step-wise probability of failure for an arbitrary policy π is:

\mathbb{E}_{a\sim\pi(\cdot|s)}\, C_{gt}(s, a) \in [0, 1]. (12)
Now we denote the cumulative discounted probability of failure as:

V_\pi(s_t) = \mathbb{E}_{\tau\sim\pi} \sum_{t'=t}^{\infty} \gamma^{t'-t}\, C_{gt}(s_{t'}, a_{t'}), (13)

which accounts for the chance of entering dangerous states at the current time step as well as in future trajectories deduced by the policy π. We use V_{π_h} = \mathbb{E}_{\tau\sim\pi_h} V_{\pi_h}(s_0) to denote the expected cumulative discounted probability of failure of the human. Following the same definition as V_{π_h}, we can also write the expected cumulative discounted probability of failure of the behavior policy as V_{\pi_b} = \mathbb{E}_{\tau\sim\pi_b} V_{\pi_b}(s_0) = \mathbb{E}_{\pi_b} \sum_{t=0}^{\infty} \gamma^{t} C_{gt}(s_t, a_t).
Assumption. Now we introduce two important assumptions on the human expert.
Assumption 1 (Error rate of human action). For all states, the step-wise probability that the human expert produces an unsafe action is bounded by a small value ε < 1:

\mathbb{E}_{a\sim\pi_h(\cdot|s)}\, C_{gt}(s, a) \le \epsilon. (14)
Assumption 2 (Error rate of human intervention). For all states, the step-wise probability that the human expert does not intervene when the agent produces an unsafe action is bounded by a small value κ < 1:

\int_{a\in A} [1 - I(s, a)]\, C_{gt}(s, a)\, da = \int_{a\in A_h(s)} C_{gt}(s, a)\, da \le \kappa. (15)
These two assumptions do not impose any constraint on the structure of the human expert policy.
Lemmas. We propose several useful lemmas and their corresponding proofs, which are used in the main theorem.
Lemma 2 (The performance difference lemma).

V_{\pi_b} = V_{\pi_h} + \frac{1}{1-\gamma}\, \mathbb{E}_{s\sim P_{\pi_b}} \mathbb{E}_{a\sim\pi_b}\big[A_{\pi_h}(s, a)\big]. (16)

Here P_{π_b} means that the states are subject to the marginal state distribution deduced by the behavior policy π_b. A_{π_h}(s, a) is the advantage of the expert at the current state-action pair: A_{π_h}(s, a) = C_{gt}(s, a) + γV_{π_h}(s') − V_{π_h}(s), where s' = P(s, a) is the next state. This lemma was proposed and proved by Kakade & Langford (2002) and is useful for showing the behavior policy's safety. In the original proposition, V and A represent the expected discounted return and advantage w.r.t. the reward, respectively. Here we replace the reward with the indicator C_{gt}, so that the value functions V_{π_b} and V_{π_h} represent the expected cumulative failure probability.
Lemma 3. The cumulative probability of failure of the expert V_{π_h}(s) is bounded for all states:

V_{\pi_h}(s) \le \frac{\epsilon}{1-\gamma}.

Proof. Following Assumption 1:

V_{\pi_h}(s_t) = \mathbb{E}_{\pi_h}\Big[\sum_{t'=t}^{\infty} \gamma^{t'-t} C_{gt}(s_{t'}, a_{t'})\Big] = \sum_{t'=t}^{\infty} \gamma^{t'-t}\, \mathbb{E}_{\pi_h}\big[C_{gt}(s_{t'}, a_{t'})\big] \le \sum_{t'=t}^{\infty} \gamma^{t'-t} \epsilon = \frac{\epsilon}{1-\gamma}. (17)
Theorem. We introduced the main theorem of this work above; it shows that the training safety is related to the error rate ε on action and the error rate κ on intervention of the human expert. The proof is given as follows.
Proof. We firstly decompose the advantage by splitting the behavior policy:

\mathbb{E}_{a\sim\pi_b(\cdot|s)} A_{\pi_h}(s, a) = \int_{a\in A} \pi_b(a|s)\, A_{\pi_h}(s, a)\, da
= \int_{a\in A} \big[\pi_n(a|s)(1 - I(s, a))\, A_{\pi_h}(s, a) + \pi_h(a|s)\, G(s)\, A_{\pi_h}(s, a)\big]\, da
= \int_{a\in A_h(s)} \pi_n(a|s)\, A_{\pi_h}(s, a)\, da + G(s)\, \mathbb{E}_{a\sim\pi_h}\big[A_{\pi_h}(s, a)\big]. (18)
The second term is equal to zero according to the definition of the advantage. We only need to compute the first term. Expanding the advantage into its detailed form, we have:

\mathbb{E}_{a\sim\pi_b(\cdot|s)} A_{\pi_h}(s, a) = \int_{a\in A_h(s)} \pi_n(a|s)\, A_{\pi_h}(s, a)\, da
= \int_{a\in A_h(s)} \pi_n(a|s)\big[C_{gt}(s, a) + \gamma V_{\pi_h}(s') - V_{\pi_h}(s)\big]\, da
= \underbrace{\int_{a\in A_h(s)} \pi_n(a|s)\, C_{gt}(s, a)\, da}_{(a)}
+ \underbrace{\gamma \int_{a\in A_h(s)} \pi_n(a|s)\, V_{\pi_h}(s')\, da}_{(b)}
- \underbrace{\int_{a\in A_h(s)} \pi_n(a|s)\, V_{\pi_h}(s)\, da}_{(c)}. (19)
Following Assumption 2, the term (a) can be bounded as:

\int_{a\in A_h(s)} \pi_n(a|s)\, C_{gt}(s, a)\, da \le \int_{a\in A_h(s)} C_{gt}(s, a)\, da \le \kappa. (20)
Following Lemma 3, the term (b) can be bounded as:

\gamma \int_{a\in A_h(s)} \pi_n(a|s)\, V_{\pi_h}(s')\, da \le \gamma \int_{a\in A_h(s)} V_{\pi_h}(s')\, da \le \frac{\gamma\epsilon}{1-\gamma} \int_{a\in A_h(s)} da = \frac{\gamma\epsilon}{1-\gamma} K(s), (21)

wherein K(s) = \int_{a\in A_h(s)} da denotes the area of the human-preferable region in the action space. It is a function of the human expert and the state.
The term (c) is always non-negative, so after applying the minus sign it is always ≤ 0. Aggregating the upper bounds of the three terms, we have the bound on the advantage:

\mathbb{E}_{a\sim\pi_b} A_{\pi_h}(s, a) \le \kappa + \frac{\gamma\epsilon}{1-\gamma} K(s). (22)
Substituting Eq. 22 as well as Lemma 3 into the performance difference lemma (Lemma 2), we have:

V_{\pi_b} = V_{\pi_h} + \frac{1}{1-\gamma}\, \mathbb{E}_{s\sim P_{\pi_b}} \mathbb{E}_{a\sim\pi_b}\big[A_{\pi_h}(s, a)\big]
\le \frac{\epsilon}{1-\gamma} + \frac{1}{1-\gamma}\Big[\kappa + \frac{\gamma\epsilon}{1-\gamma}\max_s K(s)\Big]
= \frac{1}{1-\gamma}\Big[\epsilon + \kappa + \frac{\gamma\epsilon}{1-\gamma}K'\Big], (23)
wherein K' = \max_s K(s) = \max_s \int_{a\in A_h(s)} da \ge 0 is correlated to the tolerance of the expert: if the human expert has a higher tolerance, then K' is greater.
Now we have proved the upper bound of the discounted probability of failure for the behavior policy in our method.
B VISUALIZATION OF LEARNED PROXY VALUE FUNCTION
To understand how well the proxy value function learns, we visualize 4 common scenarios in 4 pairs of figures as shown above. The left sub-figure of each pair shows a top-down view of a driving scenario, where a sequence of snapshots of the controlled vehicle is plotted, showing its trajectory. The right sub-figure of each pair overlays the heatmap of proxy values on the top-down image. We manually position the vehicle at different locations on the map, query the policy to get an action, and run the proxy Q function to get the value Q(s, a). A red region indicates that the proxy value is low if the agent is located there, and vice versa.
In Fig. 4(a), the agent performs a lane change to avoid a potential collision with a traffic vehicle that is merging into the middle lane. The region near the traffic vehicle has extremely low values, and thus the agent has a low probability of entering this area.
In Fig. 4(b), traffic cones spread in the left lane. The agent learns to avoid crashes and the proxy value heatmap shows a large region of low values.
As shown by the trajectory in Fig. 4(c), though the agent could bypass the traffic vehicle on either the left-hand or the right-hand side, it chooses the right-hand side. The heatmap shows that a much higher proxy Q value is produced on the right bypassing path compared to the left path. This behavior resembles the preference of humans, who favor a right-hand detour.
In addition, in some areas where the path boundary is ambiguous, such as the intersection, the agent manages to learn a virtual boundary in the proxy Q space for efficiently passing these areas, as shown in Fig. 4(d).
The proxy Q value distribution shown in this section not only explains the avoidance behaviors, but also serves as a good indicator of the learned human preference.
C DETAILS OF HUMAN-IN-THE-LOOP BASELINES
We benchmark the performance of two human-in-the-loop methods, HG-DAgger (Kelly et al., 2019) and IWR (Mandlekar et al., 2020). Both methods require warming up through behavior cloning on a pre-collected dataset. In practice, we find that 10K or 20K steps of human-collected data are not enough to initialize the policy with basic driving skills. Therefore, we use the pre-collected human dataset containing 30K transitions to warm up the policies. After warming up, HG-DAgger and IWR aggregate human intervention data into the training buffer and run behavior cloning again to update the policy, for 4 epochs. In each epoch the human-AI system collects 5000 transitions. The figure above shows the learning curves of IWR and HG-DAgger. As discussed in the main body of the paper, we credit the success of IWR to its re-weighting of human intervention data, which is not emphasized in HG-DAgger.
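For concreteness, here is a minimal sketch of one IWR-style round under our reading of the method; the helper names (`bc_step`) and the exact 50/50 weighting are our own illustration, not the reference implementation:

```python
import numpy as np

# Hypothetical sketch of one IWR-style behavior-cloning round: intervention
# transitions are up-weighted so the two data sources contribute equally.
def iwr_bc_round(policy, demo_data, intervention_data, bc_step, n_steps=1000):
    data = list(demo_data) + list(intervention_data)
    # Give each subset half of the total sampling mass, regardless of size.
    w = np.array([0.5 / len(demo_data)] * len(demo_data)
                 + [0.5 / len(intervention_data)] * len(intervention_data))
    w /= w.sum()
    for _ in range(n_steps):
        idx = np.random.choice(len(data), size=256, p=w)
        bc_step(policy, [data[i] for i in idx])  # one supervised imitation step
    return policy
```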
D MORE ZOOMED-IN PLOTS OF THE LEARNING CURVES
The figures above present the zoomed-in learning curves of the RL baselines and HACO, showing the superior sample efficiency of HACO compared to the RL baselines.
E HYPER-PARAMETERS

Table 4: HACO
Hyper-parameter                     Value
Discounted Factor γ                 0.99
τ for target network update         0.005
Learning Rate                       0.0001
Environmental horizon T             1500
Steps before Learning start         10000

Table 5: PPO/PPO-Lag

Cost Limit for SAC-Lag              1
BC iterations for CQL               200000
CQL Loss Temperature β              5
Min Q Weight Multiplier             0.2

Table 7: BC
Hyper-parameter                     Value
Dataset Size                        36,000
SGD Batch Size                      32
SGD Epoch                           200000
Learning Rate                       0.0001

Table 8: CPO

Table 10: HG-DAgger
Hyper-parameter                     Value
Initializing dataset size           30K
Number of data aggregation epochs   4
Interactions per round              5000
SGD batch size                      256
Learning rate                       0.0004

Table 11: IWR
Hyper-parameter                     Value
Initializing dataset size           30K
Number of data aggregation epochs   4
Interactions per round              5000
SGD batch size                      256
Learning rate                       0.0004
Re-weight data distribution         True

1. What is the focus and contribution of the paper on human-in-the-loop reinforcement learning?
2. What are the strengths of the proposed HACO method, particularly in its comparison to baseline methods?
3. What is the reviewer's concern regarding the experimental comparison with prior work?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Do you have any questions or suggestions regarding the paper's content or presentation?
Summary Of The Paper
The paper proposes HACO, a human-in-the-loop reinforcement learning method that safely trains an agent to imitate expert behavior while minimizing the number of expert interventions required. The key idea is to have a human watch over the agent (e.g., in a simulated driving environment), and take control whenever the agent enters unsafe states. HACO uses offline RL to train the agent to imitate the actions taken by the human during these interventions. To discourage the agent from intentionally visiting unsafe states in order to trigger human interventions, HACO also assigns a negative reward to the state transitions preceding a human intervention. Experiments with human participants in a simulated driving task show that HACO trains the agent to achieve higher success rates (in a test environment, without a human in the loop) than baseline methods based on imitation learning and offline RL, while requiring less training data and incurring a lower cumulative training cost.
Review
Overall, I enjoyed reading this paper and found its results convincing. However, there is one missing experimental comparison that I would like to see before I can recommend an accept: prior work on human-in-the-loop imitation learning [2] proposes the Intervention-Weighted Regression (IWR) method, which tackles the same problem under the same assumptions as HACO. While HACO directly penalizes the agent for triggering human interventions, IWR takes a different approach based on dataset balancing. To judge the relative performance and contribution of HACO, it would be helpful to implement and run IWR on the driving task in Table 1 and Figures 3-4.
Missing related work:
Learning from Interventions
Human-in-the-Loop Imitation Learning using Remote Teleoperation
Minor comments:
Is there a typo in Equation 3? As written, the objective evaluates to zero. I'm also confused as to why offline RL is necessary to fit the Q-function, since we do not have a reward function in this setting and instead have partial demonstrations. Can't we just use behavioral cloning to fit an imitation policy?
Why does the training cost go up over time in the left panel of Figure 3?
Update
Thank you to the authors for adding the comparison to IWR. I have increased my score. |
1. What is the focus and contribution of the paper on human-AI copilot policy learning?
2. What are the strengths of the proposed approach, particularly in its experimental comparisons?
3. What are the weaknesses of the paper regarding its method description, equation errors, and convergence concerns?
4. Do you have any questions about the significance of the proposed method's reliance on batch RL?
5. How does the reviewer assess the complexity and suitability of the driving tasks used in the experiments?
Summary Of The Paper
This paper proposes a method for driving policy learning based on human-AI copiloting. The algorithm learns from human interventions and also tries to minimize the total effort of human intervention. Comprehensive experiments and comparisons with multiple baselines show that the proposed algorithm can achieve high sample efficiency and reduce unsafe events. The contributions are in the design of the copilot learning method.
Review
Strengths:
This paper is well written with clear logic and goals. The experiments are comprehensive. The proposed method is compared with several types of baselines including RL, safe RL, offline RL, and IL.
The method proposed for co-learning and intervention minimization is solid.
Weakness:
The description of the method can be improved. There seem to be some errors in the equations. For example, in equation (3), the two Q(s, \hat{a}) are the same.
I am not sure how to determine the convergence of the algorithm. Why does HACO use far fewer steps than the baselines? In particular, why does HACO also use fewer steps during the testing phase?
The main part of the methodology is based on batch RL. The adding of intervention minimization seems to be a marginal contribution.
The driving tasks are considered to be too easy. |
ICLR | Title
Efficient Learning of Safe Driving Policy via Human-AI Copilot Optimization
Abstract
Human intervention is an effective way to inject human knowledge into the loop of reinforcement learning, bringing fast learning and training safety. But given the very limited budget of human intervention, it is challenging to design when and how human expert interacts with the learning agent in the training. In this work, we develop a novel human-in-the-loop learning method called Human-AI Copilot Optimization (HACO). To allow the agent’s sufficient exploration in the risky environments while ensuring the training safety, the human expert can take over the control and demonstrate to the agent how to avoid probably dangerous situations or trivial behaviors. The proposed HACO then effectively utilizes the data collected both from the trial-and-error exploration and human’s partial demonstration to train a high-performing agent. HACO extracts proxy state-action values from partial human demonstration and optimizes the agent to improve the proxy values while reducing the human interventions. No environmental reward is required in HACO. The experiments show that HACO achieves a substantially high sample efficiency in the safe driving benchmark. It can train agents to drive in unseen traffic scenes with a handful of human intervention budget and achieve high safety and generalizability, outperforming both reinforcement learning and imitation learning baselines with a large margin. Code and demo videos are available at: https://decisionforce.github.io/HACO/.
1 INTRODUCTION
How to effectively inject human knowledge into the learning process is one of the key challenges to training reliable autonomous agents in safety-critical applications. In reinforcement learning (RL), researchers can inject their intentions into the carefully designed reward function. The learning agent freely explores the environment to collect the data and develops the desired behaviors induced by the reward function. However, RL methods bear two drawbacks that limit their applications in safety-critical tasks: First, the nature of trial-and-error exploration exposes RL agent to dangerous situations (Saunders et al., 2017). Second, it is difficult to summarize all the intended behaviors to be learned into the reward function. Taking the driving vehicle as an example, an ideal policy should obtain a set of skills, such as overtaking, yielding, emergent stopping, and negotiation with other vehicles. It is intractable to manually design a reward function that leads to the emergence of all those behaviors in the trained agent. To mitigate these two challenges, practitioners enforce the human intentions through imitation learning (IL) where the agent is trained to imitate the expert-generated state and action sequences. During the demonstration, the premature agent does not interact with the risky environment and thus the training safety is ensured. High-quality expert demonstrations provide direct the optimal solution for the agent to imitate from. However, IL paradigm suffers from the distributional shift problem (Ross & Bagnell, 2010; Ross et al., 2011) while the induced skills are not sufficiently robust with respect to changes in the control task (Camacho & Michie, 1995).
Different from vanilla RL or IL, human-in-the-loop learning is an alternative paradigm to inject human knowledge, where a human subject accompanies the agent and oversees its learning process. Previous works require the human to either passively advise which action is good (Mandel et al., 2017) or evaluate the collected trajectories (Christiano et al., 2017; Guan et al., 2021; Reddy et al.,
∗Quanyi Li and Zhenghao Peng contribute equally to this work.
2018; Warnell et al., 2018; Christiano et al., 2017; Sadigh et al., 2017; Palan et al., 2019). This kind of passive human involvement exposes the human-AI system to risks since the agent explores the environment without protection. Some other works require the human to merely intervene in the exploration by terminating the episode (Saunders et al., 2018; Zhang & Cho, 2016), but it is not practical to terminate and reset the environment instantly in the real world (Xu et al., 2020). Intervening and taking over the control from the learning agent is a natural approach to safeguard the human-AI system (Kelly et al., 2019; Spencer et al., 2020). However, a challenge exhibited in previous works is the budget of human intervention. Since human cognitive resource is precious and limited, it is essential to carefully design when and how the human expert involves in the learning process so that the human knowledge can be injected effectively.
In this work, we propose an efficient human-in-the-loop learning method called Human-AI Copilot Optimization (HACO). The key feature of HACO is that it can learn to minimize the human intervention and adjust the level of automation to the learning agent adaptively during the training. As shown in Figure 1 A, HACO allows the human expert to take over the human-AI system in a proactive manner. If the human decides to intervene in the action of the agent, he/she should demonstrate the correct actions to overcome current undesired situations to the learning agent. The human intervention and the partial demonstration are two sources of informative training data. We use offline RL technique to maintain a proxy value function of the human-AI mixed behavior policy even though the agent doesn’t have the access to the environmental reward during training. To encourage the exploration in the state-action space permitted by human, we also maximize the entropy of action distribution of the agent if the agent is not taken over.
Experiments in the virtual driving environments MetaDrive (Li et al., 2021) and CARLA (Dosovitskiy et al., 2017) show that, with an economic human budget, HACO outperforms RL and IL baselines by a substantial margin in terms of sample efficiency, performance, safety, and generalizability in unseen testing environments. Thus human-AI copilot optimization is an efficient learning paradigm to inject human knowledge in an online setting.
2 RELATED WORK
Learning from Demonstration. Passive imitation learning such as behavior cloning (Widrow, 1964; Osa et al., 2018; Huang et al., 2020; Sun et al., 2020) and recently proposed offline RL methods (Kumar et al., 2020; Fujimoto et al., 2018; Wu et al., 2019) train agents from an off-the-shelf dataset and guarantee training safety, since no interaction with the environment is needed. Inverse RL methods (Ng et al., 2000; Abbeel & Ng, 2004; Fu et al., 2017; Bloem & Bambos, 2014) learn a reward function from the human demonstration and then use it to incentivize the agents to master the intended behaviors. Proposed more recently, GAIL (Ho & Ermon, 2016) and its variants (Song et al., 2018; Sasaki et al., 2018; Kostrikov et al., 2018) and SQIL (Reddy et al., 2019) compare the trajectory similarity between agents and humans and thus require the agent to interact with the environment. Similar to RL methods, this paradigm exposes the agent to potentially dangerous situations.
Human-in-the-loop Learning Methods. Many works focus on incorporating humans in the training loop of conventional RL or IL paradigms. DAgger (Ross et al., 2011) and its extended methods (Kelly et al., 2019; Zhang & Cho, 2016; Hoque et al., 2021) correct the compounding error (Ross & Bagnell, 2010) of behavior cloning by periodically requesting the expert to provide more demonstrations. Instead of providing demonstrations upon request, Human-Gated DAgger (HG-DAgger) (Kelly et al., 2019), Expert Intervention Learning (EIL) (Spencer et al., 2020) and Intervention Weighted Regression (IWR) (Mandlekar et al., 2020) empower the expert to intervene in exploration and carry the agent to safe states. However, these methods do not impose constraints to reduce human intervention and do not utilize the data from the free exploration of the agent. Human subjects can also be involved in the loop by providing preferences based on evaluative feedback on two behavior sequences generated by the agent (Christiano et al., 2017; Sadigh et al., 2017; Palan et al., 2019; Ibarz et al., 2018; Cui & Niekum, 2018).
Human-AI copilot or shared autonomy is a more intimate form of the human-in-the-loop methods. The AI agent and the human work together simultaneously to achieve a common goal. By giving the agent guidance and feedback at run-time, the explorable state and action spaces can be greatly narrowed down (Saunders et al., 2018). The learning goal can further match the task objective by providing extra human feedback combined with the reward function (Reddy et al., 2018; Warnell et al., 2018; Wu et al., 2021; Cederborg et al., 2015; Arumugam et al., 2019). Human-AI copilot is helpful and practical when applying RL to real-world tasks where safety constraints must be satisfied (Garcıa & Fernández, 2015; Amodei et al., 2016; Bharadhwaj et al., 2020; Alshiekh et al., 2018). In our previous work (Peng et al., 2021), we made an attempt to develop a method called Expert-Guided Policy Optimization (EGPO), where a PPO expert policy is involved to monitor the learning agent. The differences are twofold: (1) We substitute the expert with a human and design a special mechanism to mitigate the delayed feedback error; (2) Based on comprehensive ablation study and prototyping, we remove redundant designs such as the takeover function and the need for a reward function, making the proposed method simple yet effective.
Reducing human burden is a major challenge in human-in-the-loop methods. A feasible solution is to learn an intervention function that imitates human intervention signals and stops the catastrophic actions of agents (Kelly et al., 2019; Zhang & Cho, 2016; Saunders et al., 2017; Abel et al., 2017), which can relieve the mental stress of the human subject during training. In this work, we devise our learning scheme explicitly to include the human cognitive cost as one of the objectives to minimize.
3 HUMAN-AI COPILOT OPTIMIZATION
In this section, we introduce Human-AI Copilot Optimization (HACO), an efficient learning algorithm that trains agents from human interventions, partial demonstrations and free exploration. For human-in-the-loop learning, it is essential to design when and how to engage human subjects. The major issue is the cognitive cost of the human subject (Zhang et al., 2021). Frequent querying might bring tremendous cognitive cost and exhaust the human expert, causing incorrect or delayed feedback that hinders the training. Thus the proposed pipeline aims to minimize the human intervention cost during the training, which reduces the reliance on the expert’s demonstration over time and improves the learning agent’s autonomy. The overall workflow of HACO is presented in Algorithm 1.
3.1 HUMAN-AI COPILOT TRAINING PARADIGM
We aim to learn an autonomous agent with policy π_n(a_n|s) that can make an informed action a_n in state s. As shown in Fig. 1, we frame the human-AI copilot paradigm that extends the standard reinforcement learning diagram by incorporating a human expert. At each step, the human expert oversees the current state and decides whether to intervene. If necessary, he/she executes the human action a_h to overwrite the agent's action a_n. We denote the human intervention by a Boolean indicator I(s, a_n), and thus the action applied to the environment, called the safe action, is â = I(s, a_n) a_h + (1 − I(s, a_n)) a_n. Denoting the human policy as π_h, the actual trajectories occurring during training are derived from a shared behavior policy π_b:
π_b(a|s) = π_n(a|s)(1 − I(s, a)) + π_h(a|s) G(s),   (1)

wherein G(s) = ∫_{a′∈A} I(s, a′) π_n(a′|s) da′ is the probability of the agent choosing an action that will be rejected by the human.
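To make the shared control concrete, below is a minimal sketch of one copilot step under Eq. 1; the `env`, `agent` and `human_interface` objects are hypothetical placeholders, not part of any released implementation.

    # Minimal sketch of one human-AI copilot step (Eq. 1).
    # `env`, `agent` and `human_interface` are assumed interfaces.
    def copilot_step(env, agent, human_interface, state):
        a_n = agent.sample_action(state)                         # agent proposes a_n
        intervened = human_interface.wants_takeover(state, a_n)  # I(s, a_n)
        if intervened:
            a_h = human_interface.read_action()                  # human action a_h
            safe_action = a_h                                    # a_hat = a_h
        else:
            a_h = None
            safe_action = a_n                                    # a_hat = a_n
        next_state = env.step(safe_action)
        # The transition is stored without environmental reward or cost.
        return (state, a_n, a_h, intervened, next_state)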
We call the transition sequences during takeover, {(s_t, a_{n,t}, a_{h,t}, I(s_t, a_{n,t}), s_{t+1}), ...}, the partial demonstration. The partial demonstrations and the free-exploration transitions are recorded in the replay buffer B and fed to the training pipeline. Note that we do not need to store the environmental reward and cost in the buffer, since the proposed method does not use them.
In the human-AI copilot training, the human is obligated to guide the agent's learning and safeguard the learning process by proactively taking over control if necessary. This paradigm rules out the dispensable states and mitigates the safety concern in the free exploration of RL and active imitation learning methods (Ross et al., 2011). Different from previous offline RL works training from a fixed dataset (Bojarski et al., 2016; Ho & Ermon, 2016; Reddy et al., 2019; Kumar et al., 2020; Fujimoto et al., 2018; Wu et al., 2019), where no closed-loop feedback is accessible, the human-AI copilot training produces partial demonstrations that inject the necessary human knowledge for overcoming dangerous situations into the learning. The copilot nature alleviates the distributional shift problem: since the human intervenes when the agent performs suspicious behaviors, there is continuity of state visitation between the agent and the expert.
In the next section, we introduce how we instantiate the human-AI copilot paradigm with a human-efficient algorithm that can effectively optimize the agent toward a safe and high-performing policy.
3.2 LEARNING OBJECTIVES
We form three objectives that fully utilize the human data: (1) The agent should maximize a proxy value function Q(s, a) which reflects human intentions on how to finish the task. (2) The agent should explore thoroughly to visit the state-action subspace permitted by the human. Concretely, we maximize the action distribution entropy H(π(·|s)). (3) The agent should maximize the level of automation and reduce human intervention. Episodic human intervention is estimated by an intervention value function Q^I(s, a) based on the step-wise intervention cost C(s, a). Thus the overall learning objective of HACO becomes:
max_π E[Q(s, a) + H(π) − Q^I(s, a)].   (2)
We then discuss the practical implementation of the aforementioned design goals.
Proxy value function. HACO follows a reward-free setting, so we cannot estimate the expected state-action value based on a ground-truth reward function defined by the environment. We instead estimate a proxy value function Q(s, a; φ) (φ denotes the model parameters) that captures the ordinal preference of human experts, which implicitly reflects human intentions. We utilize conservative Q-learning (Kumar et al., 2020) and form the optimization problem of the proxy value function as:
min_φ E_{(s, a_n, a_h, I(s, a_n)) ∼ B} [I(s, a_n)(Q(s, a_n; φ) − Q(s, a_h; φ))].   (3)
The above optimization objective can be interpreted as being optimistic to the human’s action ah and pessimistic to the agent’s action an. The proxy value function learns to represent the high-value state-action subspace preferred by the human expert.
Entropy regularization. If the learning agent visits the human-preferable subspace insufficiently during free exploration, the states evoking high proxy values are rarely encountered, making the back-propagation of the proxy value to preceding states difficult and thus damaging the learning. To encourage exploration, we adopt the entropy regularization technique in (Haarnoja et al., 2018) and form an auxiliary signal to update the proxy value function in addition to Eq. 3:
min_φ E_{(s_t, â_t, s_{t+1}) ∼ B} [y − Q(s_t, â_t; φ)]^2,   y = γ E_{a′ ∼ π_n(·|s_{t+1})} [Q(s_{t+1}, a′; φ′) − α log π_n(a′|s_{t+1})],   (4)

wherein â_t is the executed action at state s_t, φ′ denotes the delayed-update parameters of the target network, and γ is the discount factor. Since the environment reward is not accessible to HACO, we remove the reward term from the update target y. Combining Eq. 3 and Eq. 4, the formal optimization objective of the proxy value function becomes:
min_φ E_B [(y − Q(s_t, â_t; φ))^2 + I(s_t, a_{n,t})(Q(s_t, a_{n,t}; φ) − Q(s_t, a_{h,t}; φ))].   (5)
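As an illustration, the proxy value update of Eq. 5 could be implemented roughly as follows in PyTorch. This is a simplified sketch under assumed batch fields and network interfaces (`q_net`, `target_q_net`, `policy.sample`), not the authors' code; on non-intervened steps the human action slot can simply repeat the agent action, since the mask zeroes its contribution.

    import torch
    import torch.nn.functional as F

    def proxy_q_loss(q_net, target_q_net, policy, batch, gamma=0.99, alpha=0.2):
        # TD-style term (Eq. 4): bootstrapped, entropy-regularized, reward-free.
        with torch.no_grad():
            next_a, next_logp = policy.sample(batch["next_state"])
            y = gamma * (target_q_net(batch["next_state"], next_a)
                         - alpha * next_logp)
        q_hat = q_net(batch["state"], batch["applied_action"])  # Q(s_t, a_hat_t)
        td_loss = F.mse_loss(q_hat, y)

        # Conservative term (Eq. 3): on intervened steps, push down the agent
        # action value and push up the human action value.
        q_agent = q_net(batch["state"], batch["agent_action"])
        q_human = q_net(batch["state"], batch["human_action"])
        mask = batch["intervened"].float()                      # I(s_t, a_{n,t})
        conservative_term = (mask * (q_agent - q_human)).mean()

        return td_loss + conservative_term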
Algorithm 1: The workflow of HACO during training
1:  Initialize an empty replay buffer B
2:  while training is not finished do
3:      while episode is not terminated do
4:          a_{n,t} ∼ π_n(·|s_t)   ▷ Retrieve agent's action
5:          I(s_t, a_{n,t}) ← human expert decides whether to intervene by observing the current state s_t
6:          if I(s_t, a_{n,t}) is True then
7:              a_{h,t} ← π_h(·|s_t)   ▷ Retrieve human's action
8:              Apply a_{h,t} to the environment
9:          else
10:             Apply a_{n,t} to the environment
11:         if I(s_t, a_{n,t}) is True and I(s_{t−1}, a_{n,t−1}) is False then
12:             C(s_t, a_{n,t}) ← compute intervention cost following Eq. 6
13:         else
14:             C(s_t, a_{n,t}) ← 0   ▷ Set intervention cost to zero
15:         Record s_t, a_{n,t}, I(s_t, a_{n,t}) and a_{h,t} (if intervened) to the buffer B
16:     Update proxy value Q, intervention value Q^I and policy π according to Eq. 5, Eq. 7 and Eq. 8, respectively
Reducing human interventions. Directly optimizing the agent policy according to the proxy value function will lead to failure when evaluating the agent without human participation. This is because Q(s, a) represents the proxy value of the mixed behavior policy π_b instead of the learning agent's π_n, due to the existence of human intervention. It is possible that the agent learns to deliberately abuse human intervention by always taking actions that violate human intentions, such as driving off the road when near the boundary, which forces the human to take over and provide demonstrations. In this case, the level of automation of the agent is low and the human subject is exhausted by providing demonstrations. The ablation study result in Table 2(c) illustrates this phenomenon.
To economically utilize the human budget and reduce human interventions over time, we penalize the agent action that triggers human intervention in a mild manner, using the cosine similarity between the agent's action and the human's action as the intervention cost function in the form below:
C(s, a_n) = 1 − (a_n^T a_h) / (||a_n|| ||a_h||),   a_h ∼ π_h(·|s).   (6)
The agent receives a large penalty only when its action is significantly different from the expert action in terms of cosine similarity.
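Eq. 6 takes only a few lines to compute; the sketch below assumes batched 1-D action tensors and is merely illustrative.

    import torch
    import torch.nn.functional as F

    def intervention_cost(a_n: torch.Tensor, a_h: torch.Tensor) -> torch.Tensor:
        # C(s, a_n) = 1 - cos(a_n, a_h): close to 0 when the agent's action
        # already points in the human's direction, up to 2 when opposite.
        return 1.0 - F.cosine_similarity(a_n, a_h, dim=-1)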
A straightforward form of C is a constant +1 whenever the human expert issues an intervention. However, we find that there usually exists a temporal mismatch between the human intervention and the faulty action, so that the intervention cost is given to the agent at a delayed time step t + ε. It is possible that the agent's action a_{n,t+ε} is a correct action that saves the agent from danger, but it is mistakenly marked as the faulty action that triggers human intervention. In the ablation study, we find that using the constant cost yields inferior performance compared to the cosine similarity.
As shown in Line 11-14 of Algorithm 1, we only yield non-zero intervention cost at the first step of human intervention. This is because the human intervention triggered by the exact action an,t indicates this action violates the underlying intention of human at this moment. Minimizing the chance of those actions will increase the level of automation.
To improve the level of automation, we form an additional intervention value function Q^I(s, a) as the expected cumulative intervention cost, similar to estimating the state-action value in Q-learning through the Bellman equation:

Q^I(s_t, a_{n,t}) = C(s_t, a_{n,t}) + γ E_{s_{t+1} ∼ B, a_{t+1} ∼ π_n(·|s_{t+1})} [Q^I(s_{t+1}, a_{t+1})].   (7)
This value function is used to directly optimize the policy.
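Since Eq. 7 is an ordinary Bellman backup with C playing the role of the reward, Q^I can be fitted with a standard TD regression. A hedged sketch, reusing the hypothetical interfaces from the proxy-value snippet:

    import torch
    import torch.nn.functional as F

    def intervention_q_loss(qi_net, target_qi_net, policy, batch, gamma=0.99):
        # TD target for Eq. 7: cost now plus discounted Q^I at the next state,
        # with the next action drawn from the current agent policy.
        with torch.no_grad():
            next_a, _ = policy.sample(batch["next_state"])
            target = batch["cost"] + gamma * target_qi_net(batch["next_state"], next_a)
        qi = qi_net(batch["state"], batch["agent_action"])
        return F.mse_loss(qi, target)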
Learning policy. Using the entropy-regularized proxy value function Q(s, a) as well as the intervention value function Q^I(s, a), we form the policy improvement objective as:

max_θ E_{s_t ∼ B} [Q(s_t, a_n) − α log π_n(a_n|s_t; θ) − Q^I(s_t, a_n)],   a_n ∼ π_n(·|s_t; θ).   (8)
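A minimal sketch of the corresponding policy loss (the negation of Eq. 8), assuming the policy exposes a reparameterized sampler `policy.rsample`:

    import torch

    def policy_loss(policy, q_net, qi_net, batch, alpha=0.2):
        a_n, logp = policy.rsample(batch["state"])  # differentiable action sample
        # Eq. 8: maximize Q - alpha * log pi - Q^I, i.e. minimize the negation.
        objective = (q_net(batch["state"], a_n)
                     - alpha * logp
                     - qi_net(batch["state"], a_n))
        return -objective.mean()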
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
Task. We focus on the driving task in this work. Driving is an important decision-making problem with a huge social impact, where safety and training efficiency are critical. Since much research on autonomous driving employs a human in a real vehicle (Bojarski et al., 2016; Kelly et al., 2019), human safety and human cognitive cost become practical challenges that limit the application of learning-based methods in industry. Therefore, the driving task is an ideal benchmark for the human-AI copilot paradigm.
Simulator. Considering the potential risks of employing human subjects in physical experiments, we benchmark different approaches in the driving simulator. We employ a lightweight driving simulator MetaDrive (Li et al., 2021), which preserves the capacity to evaluate the safety and generalizability in unseen environments. The simulator is implemented based on Panda3D (Goslin & Mine, 2004) and Bullet Engine that has high efficiency as well as accurate physics-based 3D kinetics. MetaDrive uses procedural generation to synthesize an unlimited number of driving maps for the split of training and test sets, which is useful to benchmark the generalization capability of different approaches in the context of safe driving. Some generated driving scenes are presented in Fig. 2. The simulator is also extremely efficient and flexible so that we can run the human-AI copilot experiment in real-time. Though we mainly describe the setting of MetaDrive in this section, we also experiment on CARLA (Dosovitskiy et al., 2017) simulator in Sec. 4.3.
Training Environment. In the simulator, the task for the agent is to steer the target vehicle with low-level control signals, namely acceleration, brake and steering, to reach the predefined destination and receive a success flag. The ratio of episodes in which the agent successfully reaches the destination is called the success rate. To increase the difficulty of the task, we randomly scatter obstacles in each driving scene, such as movable traffic vehicles, fixed traffic cones, and warning triangles.
The observation contains (1) the current states such as the steering, heading, velocity and relative distance to boundaries etc., (2) the navigation information that guides the vehicle toward the destination, and (3) the surrounding information encoded by a vector of 240 Lidar-like distance measures of the nearby vehicles.
Though HACO does not receive the environmental reward during training, we provide a reward function to train the baseline methods and to evaluate HACO at test time. The reward function contains a dense driving reward, a speed reward and a sparse terminal reward. The driving reward measures the longitudinal movement toward the destination. We also reward the agent according to its velocity and give a sparse reward of +20 when the agent arrives at the destination.
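As a rough illustration of how such a composite reward can be assembled (the weights below are placeholders, not the coefficients used in MetaDrive):

    def test_time_reward(progress_delta, speed, max_speed, arrived,
                         w_drive=1.0, w_speed=0.1, terminal_bonus=20.0):
        # Dense driving reward: longitudinal movement toward the destination.
        r = w_drive * progress_delta
        # Speed reward: proportional to the current velocity.
        r += w_speed * (speed / max_speed)
        # Sparse terminal reward upon reaching the destination.
        if arrived:
            r += terminal_bonus
        return r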
Each collision with a traffic vehicle or obstacle yields +1 environmental cost. Note that HACO cannot access this cost during training. This cost is used to train the safe RL baselines as well as to test the safety of trained policies. We term the episodic cost the safety violation, which measures the safety of a policy.
We invite the human expert to supervise the real-time exploration of the learning agent with hands on the steering wheel, as shown in Fig. 1B. When a dangerous situation is about to happen, the human takes over the vehicle by pressing the paddle beside the wheel and starts controlling the vehicle by steering the wheel and stepping on the pedals.
Split of training and test sets. Different from the conventional RL setting where the agent is trained and tested in the same fixed environment, we focus on evaluating the generalization performance by testing the trained agents in separate test environments. We split the driving scenes into a training set and a test set, with 50 different scenes in each. After each training iteration, we roll out the learning agent without the guardian in the test environments and record the success rate and safety violation given by the environment, as presented in Table 1.
* During HACO training, the human expert intervenes and overwrites the agent's actions in 8,316 ± 497.90 steps out of the total 30K steps. The whole training takes about 50 minutes.
Implementation details. We conduct experiments on the driving simulator and implement algorithms using RLLib (Liang et al., 2018), an efficient distributed learning system. When training the baselines, we host 8 concurrent trials on an Nvidia GeForce RTX 2080 Ti GPU. Each trial consumes 2 CPUs with 8 parallel rollout workers. Except for the human-in-the-loop experiments, all baseline experiments are repeated 5 times with different random seeds. The main experiments of HACO are conducted on a local computer with an Nvidia GeForce RTX 2070 and repeated 3 times. The ablations and the baseline human-in-the-loop experiments are run once due to the limited human budget. One human subject participates in each experiment. In all tables and figures, we provide the standard deviation if the experiments are repeated over multiple runs with different random seeds. Information about other hyper-parameters is given in the Appendix.
4.2 BASELINE COMPARISON
We compare our method to vanilla RL and safe RL methods, which inject the human intention and constraint through pre-defined reward and cost functions. We test vanilla RL methods, PPO (Schulman et al., 2017) and SAC (Haarnoja et al., 2018), with the cost added to the reward as an auxiliary negative reward, called reward shaping (RS). Three common safe RL baselines, Constrained Policy Optimization (CPO) (Achiam et al., 2017), PPO-Lagrangian (Stooke et al., 2020) and SAC-Lagrangian (Ha et al., 2020), are evaluated.
Apart from the RL methods, we also generate a human demonstration dataset containing one hour of expert demonstrations, with about 36K transitions in the training environments. For the high-quality demonstrations in this dataset, the success rate of the episodes reaches 98% and the safety violation is as low as 0.16. Using this dataset, we evaluate the passive IL method Behavior Cloning, the active IL method GAIL (Ho & Ermon, 2016) and the offline RL method CQL (Kumar et al., 2020). We also run Human-Gated DAgger (HG-DAgger) (Kelly et al., 2019) and Intervention Weighted Regression (IWR) (Mandlekar et al., 2020) as human-in-the-loop baselines based on this dataset and the human-AI copilot workflow.
Training-time Safety. The training-time safety is measured by the total training safety violation, i.e., the total number of critical failures occurring during training. Note that the environmental cost here is different from the human intervention cost in HACO. As illustrated in Table 1 and Fig. 3A, HACO achieves huge success in training-time safety. Apart from the empirical results, we provide a proof in the Appendix showing that the training safety can be bounded by the guardian. Under the protection of the human expert, HACO yields only 30.14 total safety violations in the whole training process, two orders of magnitude better than other RL baselines, even though HACO does not access the environmental cost. IWR and HG-DAgger also achieve drastically lower training safety violations, showing the power of human-in-the-loop methods. The most competitive RL baseline, SAC-RS, which achieves a similar test success rate, causes on average 2767.77 training safety violations, which is much higher than HACO. The active IL method GAIL also has significantly higher safety violations than HACO, and its performance is unsatisfactory.
From the perspective of safety, we find that the reward shaping technique is inferior compared to the Lagrangian method, both for SAC and PPO variants. PPO causes more violations than SAC, probably due to the relatively lower sample efficiency and slower convergence speed.
Sample Efficiency and Human Cognitive Cost. The human-AI system is not only well protected by the human, but also achieves superior sample efficiency with limited data usage. As shown in Fig. 3A and Table 1, we find that HACO is an order of magnitude more efficient than the RL baselines. HACO achieves a 0.83 test success rate by interacting with the environment for merely 30K steps, wherein the human provides safe actions as demonstrations in only 8,316 steps on average. During the nearly 50 minutes of human-AI copilot, the human provides demonstrations in only 27% of the steps.
The human-in-the-loop baselines IWR and HG-DAgger consume 50K steps of human budget, and only IWR achieves a satisfactory success rate. By prioritizing samples from human intervention, IWR manages to learn the key actions from human intervention for escaping dangerous situations caused by the compounding error. Without re-weighting the human takeover data, HG-DAgger fails to learn from the few but important human demonstrations. The learning curves of these two methods can be found in the Appendix.
Unlike the success of HACO, all the learning-from-demonstration methods fail with the dataset containing 36K transitions. Compared to IL methods, which optimize agents to imitate the exact action at each time step, HACO considers learning on a trajectory basis. We incentivize the agent to choose an action that can bring potential return in the future trajectory, instead of only mimicking the expert's behavior at each step. On the other hand, HACO gathers expert data in an online manner through the human-AI copilot, which better mitigates the distributional shift that is severe in offline RL methods.
Learning Dynamics. The intervention minimization mechanism in HACO reduces human cognitive cost. As shown in Fig. 3B, the takeover rate gradually decreases in the course of learning. The curve of episodic intervention cost suggests that the human intervention frequency becomes lower and the similarity between agent’s action and human’s action increases. We also provide visualization of the learned proxy value function in the Appendix, showing that the learning scheme of HACO can effectively encode human preference into the proxy values.
4.3 ABLATION STUDY
Takeover Policy Analysis. We request the human subjects to try two intervention strategies. The first is to take over at a low frequency and produce a long trajectory at each intervention. In this way the intervention cost becomes sparse. The other strategy is to intervene more frequently and provide fragmented demonstrations. In Table 2(a), the experiment shows that the proposed HACO works better with dense human intervention signals. Agents trained with long trajectories achieve inferior success rates and episodic rewards compared to agents trained with dense intervention signals.
Cosine Similarity Cost Function. As shown in Table 2(b), we replace the intervention cost function in Eq. 6 with a constant value of +1 whenever human intervention happens. We find that the agent learns to stay at the spawn points and does not move at all at test time. As discussed in Sec. 3.2, it is possible that the human intervenes at incorrect timing, which makes the agent fail to identify how to drive correctly. Using the negative cosine similarity to measure the divergence between the agent's and the human's actions alleviates this phenomenon, since the human intervention penalty is down-weighted when the agent provides an action that adheres to the human intention.
Intervention Minimization. As shown in Table 2(c), when removing the intervention minimization mechanism, the agent drives directly toward the boundary. This is because the agent learns to abuse the human expert to take over all the time, which increases proxy values but causes consistent out-of-the-road failures in testing. This result shows the importance of intervention minimization.
CARLA Experiment. To test the generality of HACO, we run HACO in the CARLA simulator (Dosovitskiy et al., 2017). We use the top-down semantic view provided by CARLA as the input and a 3-layer CNN as the feature extractor for HACO and the PPO baseline. For PPO, the reward follows the setting described in CARLA and is based on the velocity and the completion of the road. We train HACO (with a human expert) and PPO in CARLA town 1 and report the test performance in CARLA town 2. Table 3 shows that the proposed HACO can be successfully deployed in the CARLA simulator with visual observation and achieves comparable results. Also, it can train the driving agent with a new CNN feature extractor in 10 minutes with only 8,000 samples in the environment. The video is available at: https://decisionforce.github.io/HACO/.
5 CONCLUSION
We develop an efficient human-in-the-loop learning method, Human-AI Copilot Optimization (HACO), which trains agents from the human interventions and partial demonstrations. The method incorporates the human expert in the interaction between agent and environment to ensure safe and efficient exploration. The experiments on safe driving show that the proposed method achieves superior training-time safety, outperforming RL and IL baselines. Besides, it shows a high sample efficiency for rapid learning. The constrained optimization technique is used to prevent the agent from excessively exploiting the human expert, which also decreases the takeover frequency and saves valuable human budget.
One limitation of this work is that the trained agents behave conservatively compared to the agents from RL baselines. Aiming to ensure the training-time safety of the copilot system, the human expert typically slows the vehicle down to rescue it from risky situations. This makes the agent tend to drive slowly and exhibit behaviors such as frequent yielding at intersections. In future work, we will explore the possibility of learning more sophisticated skills.
Acknowledgments This project was supported by the Centre for Perceptual and Interactive Intelligence (CPII) Ltd under InnoHK supported by the Innovation and Technology Commission.
ETHICS STATEMENT
The proposed Human-AI Copilot Optimization algorithm aims at developing a new human-friendly human-in-the-loop training framework. We successfully increase the level of automation after human-efficient training. We believe this work has a great positive social impact, as it advances the development of more intelligent AI systems that impose less burden on humans.
We employ human subjects to participate in the experiments. Human subjects can stop the experiment if any discomfort happens. No human subjects were harmed in the experiments, since we test in the driving simulator. The human subjects earn an hourly salary above the average in our community. Each experiment lasts nearly one hour. Human participants rest at least three hours after each experiment. During training and data processing, no personal information is revealed in the collected dataset or the trained agents.
A MAIN THEOREM AND THE PROOF
In this section, we derive the upper bound of the discounted probability of failure of HACO, showing that we can bound the training safety with the guardian.

Theorem 1 (Upper bound of training risk). The expected cumulative probability of failure V_{π_b} of the behavior policy π_b in HACO is bounded by the error rate of the human expert action ε, the error rate of the human expert intervention κ and the tolerance of the human expert K′:

V_{π_b} ≤ (1 / (1 − γ)) [ε + κ + (γε / (1 − γ)) K′],

wherein K′ = max_s K(s) = max_s ∫_{a ∈ A_h(s)} da ≥ 0 is called the human expert tolerance.
The human expert tolerance K′ becomes larger if the human relaxes the intervention and allows the agent to explore the environment more freely.
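As a quick numeric sanity check of the bound (the values below are arbitrary illustrations, not measured quantities):

    def risk_upper_bound(eps, kappa, K, gamma):
        # V_{pi_b} <= 1/(1-gamma) * [eps + kappa + gamma*eps/(1-gamma) * K']
        return (eps + kappa + gamma * eps / (1.0 - gamma) * K) / (1.0 - gamma)

    # With eps = kappa = 0.01, K' = 0.1 and gamma = 0.9, the bound is ~0.29.
    print(risk_upper_bound(eps=0.01, kappa=0.01, K=0.1, gamma=0.9))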
The proof is given as follows.
Notations. Before starting, we first recap and describe the notations. In HACO, a human subject copilots with the learning agent. The agent's policy is π_n and the human's policy is π_h. Both policies produce actions in the bounded action space A ⊆ R^{|A|}. The human expert decides to intervene at a certain state given the agent's action a_n. The human intervention is denoted by a Boolean function I(s, a). The mixed behavior policy π_b that produces the real actions applied to the environment is:

π_b(a|s) = π_n(a|s)(1 − I(s, a)) + π_h(a|s) G(s),   (9)

wherein G(s) = ∫_{a′∈A} I(s, a′) π_n(a′|s) da′ is a function which denotes the probability of choosing an action that will be rejected by the human.
Therefore, at a given state, we can split the action space into two parts: where intervention will happen and where it will not happen if the agent samples an action there. We denote the confident action space as:

A_h(s) = {a : I(s, a) is False}.   (10)

The confident action space contains the actions that will not be rejected by the human expert at state s.
We also define a ground-truth indicator C_gt denoting whether the action will lead to an unsafe state. This unsafe state is determined by the environment and is not revealed to the learning algorithm:

C_gt(s, a) = { 1, if the next state s′ = P(s, a) is an unsafe state;
               0, otherwise.   (11)
Therefore, at a given state s, the step-wise probability of failure for an arbitrary policy π is:

E_{a ∼ π(·|s)} C_gt(s, a) ∈ [0, 1].   (12)
Now we denote the cumulative discounted probability of failure as:

V_π(s_t) = E_{τ ∼ π} Σ_{t′=t}^{∞} γ^{t′−t} C_gt(s_{t′}, a_{t′}),   (13)
which counts the chance of entering dangerous states at the current time step as well as in the future trajectories induced by the policy π. We use V_{π_h} = E_{τ ∼ π_h} V_{π_h}(s_0) to denote the expected cumulative discounted probability of failure of the human. Following the same definition as V_{π_h}, we can also write the expected cumulative discounted probability of failure of the behavior policy as: V_{π_b} = E_{τ ∼ π_b} V_{π_b}(s_0) = E_{π_b} Σ_{t=0}^{∞} γ^t C_gt(s_t, a_t).
Assumption. Now we introduce two important assumptions on the human expert.
Assumption 1 (Error rate of human action). For all states, the step-wise probability that the human expert produces an unsafe action is bounded by a small value ε < 1:

E_{a ∼ π_h(·|s)} C_gt(s, a) ≤ ε.   (14)
Assumption 2 (Error rate of human intervention). For all states, the step-wise probability that the human expert does not intervene when the agent produces an unsafe action is bounded by a small value κ < 1:

∫_{a ∈ A} [1 − I(s, a)] C_gt(s, a) da = ∫_{a ∈ A_h(s)} C_gt(s, a) da ≤ κ.   (15)
These two assumptions do not impose any constraint on the structure of the human expert policy.
Lemmas. We propose several useful lemmas and the corresponding proofs, which are used in the main theorem.
Lemma 2 (The performance difference lemma).

V_{π_b} = V_{π_h} + (1 / (1 − γ)) E_{s ∼ P_{π_b}} E_{a ∼ π_b} [A_{π_h}(s, a)].   (16)
Here P_{π_b} means that the states are subject to the marginal state distribution induced by the behavior policy π_b. A_{π_h}(s, a) is the advantage of the expert at the current state-action pair: A_{π_h}(s, a) = C_gt(s, a) + γ V_{π_h}(s′) − V_{π_h}(s), where s′ = P(s, a) is the next state. This lemma was proposed and proved by Kakade & Langford (2002) and is useful for showing the behavior policy's safety. In the original proposition, V and A represent the expected discounted return and advantage w.r.t. the reward, respectively. Here we replace the reward with the indicator C_gt, so that the value functions V_{π_b} and V_{π_h} represent the expected cumulative failure probability.
Lemma 3. The cumulative probability of failure of the expert V_{π_h}(s) is bounded for all states:

V_{π_h}(s) ≤ ε / (1 − γ).
Proof. Following Assumption 1:

V_{π_h}(s_t) = E_{π_h} [Σ_{t′=t}^{∞} γ^{t′−t} C_gt(s_{t′}, a_{t′})] = Σ_{t′=t}^{∞} γ^{t′−t} E_{π_h} [C_gt(s_{t′}, a_{t′})] ≤ Σ_{t′=t}^{∞} γ^{t′−t} ε = ε / (1 − γ).   (17)
Theorem. We introduced the main theorem of this work above, which shows that the training safety is related to the error rate on action ε and the error rate on intervention κ of the human expert. The proof is given as follows.
Proof. We first decompose the advantage by splitting the behavior policy:

E_{a ∼ π_b(·|s)} A_{π_h}(s, a) = ∫_{a ∈ A} π_b(a|s) A_{π_h}(s, a) da
= ∫_{a ∈ A} {π_n(a|s)(1 − I(s, a)) A_{π_h}(s, a) + π_h(a|s) G(s) A_{π_h}(s, a)} da
= ∫_{a ∈ A_h(s)} π_n(a|s) A_{π_h}(s, a) da + G(s) E_{a ∼ π_h} [A_{π_h}(s, a)].   (18)
The second term is equal to zero according to the definition of the advantage. We only need to compute the first term. Expanding the advantage into its detailed form, we have:
E_{a ∼ π_b(·|s)} A_{π_h}(s, a) = ∫_{a ∈ A_h(s)} π_n(a|s) A_{π_h}(s, a) da
= ∫_{a ∈ A_h(s)} π_n(a|s) [C_gt(s, a) + γ V_{π_h}(s′) − V_{π_h}(s)] da
= ∫_{a ∈ A_h(s)} π_n(a|s) C_gt(s, a) da   (a)
  + γ ∫_{a ∈ A_h(s)} π_n(a|s) V_{π_h}(s′) da   (b)
  − ∫_{a ∈ A_h(s)} π_n(a|s) V_{π_h}(s) da   (c).   (19)
Following Assumption 2, the term (a) can be bounded as:

∫_{a ∈ A_h(s)} π_n(a|s) C_gt(s, a) da ≤ ∫_{a ∈ A_h(s)} C_gt(s, a) da ≤ κ.   (20)
Following Lemma 3, the term (b) can be bounded as:

γ ∫_{a ∈ A_h(s)} π_n(a|s) V_{π_h}(s′) da ≤ γ ∫_{a ∈ A_h(s)} V_{π_h}(s′) da ≤ (γε / (1 − γ)) ∫_{a ∈ A_h(s)} da = (γε / (1 − γ)) K(s),   (21)

wherein K(s) = ∫_{a ∈ A_h(s)} da denotes the area of the human-preferable region in the action space. It is a function of the human expert and the state.
The term (c) is always non-negative, so its negation is always ≤ 0. Aggregating the upper bounds of the three terms, we have the bound on the advantage:
E_{a ∼ π_b} A_{π_h}(s, a) ≤ κ + (γε / (1 − γ)) K(s).   (22)
Now we substitute Eq. 22 as well as Lemma 3 into the performance difference lemma (Lemma 2):

V_{π_b} = V_{π_h} + (1 / (1 − γ)) E_{s ∼ P_{π_b}} E_{a ∼ π_b} [A_{π_h}(s, a)]
≤ ε / (1 − γ) + (1 / (1 − γ)) [κ + (γε / (1 − γ)) max_s K(s)]
= (1 / (1 − γ)) [ε + κ + (γε / (1 − γ)) K′],   (23)
wherein K′ = max_s K(s) = max_s ∫_{a ∈ A_h(s)} da ≥ 0 is correlated with the tolerance of the expert. If the human expert has a higher tolerance, then K′ will be greater.
Now we have proved the upper bound of the discounted probability of failure for the behavior policy in our method.
B VISUALIZATION OF LEARNED PROXY VALUE FUNCTION
To understand how well the proxy value function learns, we visualize 4 common scenarios in 4 pairs of figures as shown above. The left sub-figure of each pair shows a top-down view of a driving scenario, where a sequence of snapshots of the controlled vehicle is plotted, showing its trajectory. The right sub-figure of each pair overlays the heatmap of proxy values on the top-down image. We manually position the vehicle at different locations on the map, query the policy to get the action, and run the proxy Q function to get the value Q(s, a). Regions in red indicate that the proxy value is low if the agent is located there, and vice versa.
In Fig. 4(a), the agent performs a lane change to avoid a potential collision with a traffic vehicle that is merging into the middle lane. The region near the traffic vehicle has extremely low values, and thus the agent has a small probability of entering this area.
In Fig. 4(b), traffic cones spread in the left lane. The agent learns to avoid crashes and the proxy value heatmap shows a large region of low values.
As shown by the trajectory in Fig. 4(c), though the agent can choose to bypass the traffic vehicle on either the left-hand or the right-hand side, it chooses the right-hand side. The heatmap shows that a much higher proxy Q value is produced on the right bypassing path compared to the left path. This behavior resembles the preference of humans, who prefer a right-hand-side detour.
In addition, in some areas where the path boundary is ambiguous, such as the intersection, the agent manages to learn a virtual boundary in the proxy Q space for efficiently passing these areas, as shown in Fig. 4(d).
The proxy Q value distribution shown in this section not only explains the avoidance behaviors, but also serves as a good indicator for the learned human preference.
C DETAILS OF HUMAN-IN-THE-LOOP BASELINES
We benchmark the performance of two human-in-the-loop methods, HG-DAgger (Kelly et al., 2019) and IWR (Mandlekar et al., 2020). Both methods require warming up through behavior cloning on a pre-collected dataset. In practice, we find that using 10K or 20K steps of human-collected data is not enough to initialize the policy with basic driving skills. Therefore, we use the pre-collected human dataset containing 30K transitions to warm up the policies. After warming up, HG-DAgger and IWR aggregate human intervention data into the training buffer and conduct behavior cloning again to update the policy for 4 epochs. In each epoch the human-AI system collects 5,000 transitions. The figure above shows the learning curves of IWR and HG-DAgger. As discussed in the main body of the paper, we credit the success of IWR to the re-weighting of human intervention data, which is not emphasized in HG-DAgger.
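The re-weighting step that distinguishes IWR from HG-DAgger can be pictured as a biased sampler over the aggregated buffer; the snippet below is a simplified illustration with an assumed buffer layout, not the authors' implementation.

    import numpy as np

    def sample_batch(buffer, batch_size, intervention_weight=2.0):
        # Each transition carries a boolean `intervened` flag; intervened
        # samples are drawn with higher probability (IWR), while uniform
        # sampling (weight 1.0 everywhere) corresponds to HG-DAgger.
        w = np.array([intervention_weight if t["intervened"] else 1.0
                      for t in buffer])
        p = w / w.sum()
        idx = np.random.choice(len(buffer), size=batch_size, p=p)
        return [buffer[i] for i in idx]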
D MORE ZOOM-IN PLOT OF THE LEARNING CURVES
The figures above present the zoomed-in learning curves of the RL baselines and HACO, showing the superior sample efficiency of HACO compared to the RL baselines.
E HYPER-PARAMETERS
Table 4: HACO
Table 5: PPO/PPO-Lag
Hyper-parameter                      Value
Discounted Factor γ                  0.99
τ for target network update          0.005
Learning Rate                        0.0001
Environmental horizon T              1500
Steps before Learning start          10000
Cost Limit for SAC-Lag               1
BC iterations for CQL                200000
CQL Loss Temperature β               5
Min Q Weight Multiplier              0.2
Table 7: BC
Hyper-parameter                      Value
Dataset Size                         36,000
SGD Batch Size                       32
SGD Epoch                            200000
Learning Rate                        0.0001
Table 8: CPO
Table 10: HG-DAgger
Hyper-parameter                      Value
Initializing dataset size            30K
Number of data aggregation epoch     4
Interactions per round               5000
SGD batch size                       256
Learning rate                        0.0004
Table 11: IWR
Hyper-parameter                      Value
Initializing dataset size            30K
Number of data aggregation epoch     4
Interactions per round               5000
SGD batch size                       256
Learning rate                        0.0004
Re-weight data distribution          True | 1. What is the focus and contribution of the paper regarding imitative driving policy learning?
2. What are the strengths of the proposed HACO algorithm, particularly in its simplicity, effectiveness, and experimental performance?
3. What are the weaknesses of the paper, such as concerns about overfitting or deviations from established methods like CQL?
4. Do you have any questions or suggestions regarding the presentation, analysis, or typos in the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents HACO, a human-in-the-loop learning algorithm that aims to learn imitative driving policy while minimizing the number of human interventions. HACO builds on CQL and operates under the no-reward assumption. HACO learns a proxy action-value function by penalizing the policy’s action and maximizing during human interventions. It additionally adds an entropy term to encourage exploration. The policy trains by maximizing the proxy action-value, and penalizing an accumulative intervention cost, computed using the cosine difference between the human and the policy actions. HACO is evaluated in a closed-loop driving simulator. HACO outperforms the selected imitation and offline RL baseline and is on par with RL methods which have access to environment rewards. It is also orders of magnitude more sample efficient than standard RL methods.
Review
=== Strengths ===
The paper is well written and the presentation is easy to follow.
The overall idea and the human-in-the-loop setup are nice and suit very well with real-world applications.
The presented approach is simple and effective, has strong experimental performance compared to IL, is on par with RL, and does not require an extra reward function. It is much more sample efficient than the common RL approaches.
The paper also presents a novel driving simulator with procedurally generated maps and active agents. The authors mention that the simulator will be released, which is a big plus.
=== Weaknesses ===
I like the analysis that justifies the design choice of equation 6, but I think the proposed solution slightly overfits to the environment (for example, the steering action which the environment uses), since in general the action space does not necessarily form a metric space.
There is a typo in equation 3.
The authors mention the method builds upon CQL; however, judging from equation 5 it does not seem to strictly follow CQL, but looks more like a standard Q-learning loss. I would appreciate it if the authors could clarify this.
ICLR | Title
Discovering Distinctive ``Semantics'' in Super-Resolution Networks
Abstract
Image super-resolution (SR) is a representative low-level vision problem. Although deep SR networks have achieved extraordinary success, we are still unaware of their working mechanisms. Specifically, do SR networks learn semantic information, or do they just perform complex mapping functions? What hinders SR networks from generalizing to real-world data? These questions not only raise our curiosity, but also influence SR network development. In this paper, we make a primary attempt to answer the above fundamental questions. After comprehensively analyzing the feature representations (via dimensionality reduction and visualization), we successfully discover the distinctive “semantics” in SR networks, i.e., deep degradation representations (DDR), which relate to image degradation instead of image content. We show that a well-trained deep SR network is naturally a good descriptor of degradation information. Our experiments also reveal two key factors (adversarial learning and global residual) that influence the extraction of such semantics. We further apply DDR to several interesting applications (such as distortion identification, blind SR and generalization evaluation) and achieve promising results, demonstrating the correctness and effectiveness of our findings.
1 INTRODUCTION
The emergence of deep convolutional neural networks (CNNs) has given birth to a large number of new solutions to low-level vision tasks (Dong et al., 2014; Zhang et al., 2017). Among these advances, image super-resolution (SR) has enjoyed a great performance leap. Compared with traditional methods (e.g., interpolation (Keys, 1981) and sparse coding (Yang et al., 2008)), SR networks can achieve better performance with improved efficiency.
However, even if we have benefited a lot from the powerful CNNs, we have little knowledge about what happens inside SR networks and what actually distinguishes them from traditional approaches. Does the performance gain merely come from more complex mapping functions? Or is there something different inside SR networks, like the discriminative capability of classification networks? On the other hand, as a classic regression task, SR is expected to perform a continuous mapping from low-resolution (LR) to high-resolution (HR) images. It is generally a local operation without consideration of the global context. But with the introduction of GAN-based models (Ledig et al., 2017; Wang et al., 2018), more delicate SR textures can be generated. It seems that the network has learned some kind of semantics, which is beyond our common perception of regression tasks.
Then, we may raise the question: are there any “semantics” in SR networks? If yes, do these semantics have different definitions from those in classification networks? Existing literature cannot answer these questions, as there is little research on interpreting low-level vision deep models. Nevertheless, discovering the semantics in SR networks is of great importance. It can not only help us further understand the underlying working mechanisms, but also guide us to design better networks and evaluation algorithms.
In this study, we give affirmative answers to the above questions by unfolding the semantics hidden in super-resolution networks. Specifically, different from the artificially predefined semantics associated with object classes in high-level vision, semantics in SR networks are distinct in terms of image degradation instead of image content. Accordingly, we name such semantics deep degradation representations (DDR). More interestingly, such degradation-related semantics exist spontaneously without any predefined labels. We reveal that a well-trained deep SR network is naturally a good descriptor of degradation information.
Notably, the semantics in this paper have different implications from those in high-level vision. Previously, researchers have disclosed the hierarchical nature of classification networks (Zeiler & Fergus, 2014; Gu et al., 2018). As the layer deepens, the learned features respond more to abstract high-level patterns (e.g., faces and legs), showing a stronger discriminability to object categories (see Fig. 4). However, similar research in low-level vision is absent, since there are no predefined semantic labels. In this paper, we reveal the differences in deep “semantics” between classification and SR networks, as illustrated in Fig. 1.
Our observation stems from a representative blind SR method – CinCGAN (Yuan et al., 2018) – and we further extend it to more common SR networks – SRResNet and SRGAN (Ledig et al., 2017). We have also revealed more interesting phenomena to help interpret the semantics, including the analogy to classification networks and the influential factors for extracting DDR. Moreover, we improve the results of several tasks by exploiting DDR. We believe our findings could lay the groundwork for the interpretability of SR networks, and inspire more exploration of the mechanisms of low-level vision deep models.
Contributions. 1) We have successfully discovered the “semantics” in SR networks, denoted as deep degradation representations (DDR). Through in-depth analysis, we also find that global residual learning and adversarial learning can facilitate the SR network to extract such degradation-related representations. 2) We reveal the differences in deep representations between classification and SR networks, for the first time. This further expands our knowledge of the deep representations of highand low-level vision models. 3) We exploit our findings to several fundamental tasks and achieve very appealing results, including distortion identification, blind SR and generalization evaluation.
2 RELATED WORK
Super-resolution. Super-resolution (SR) is a fundamental task in low-level vision, which aims to reconstruct the high-resolution (HR) image from the corresponding low-resolution (LR) counterpart. SRCNN (Dong et al., 2014) is the first CNN-based method for SR. Since then, a large number of deep-learning-based methods have been developed (Dong et al., 2016; Lim et al., 2017; Zhang et al., 2018b; Ledig et al., 2017; Zhang et al., 2019). Generally, current CNN-based SR methods fall into two groups. One consists of MSE-based methods, which target minimizing the distortion (e.g., mean square error) between the ground-truth HR image and the super-resolved image to yield high PSNR values, such as SRCNN (Dong et al., 2014), VDSR (Kim et al., 2016), EDSR (Lim et al., 2017), RCAN (Zhang et al., 2018b), SAN (Dai et al., 2019), etc. The other consists of GAN-based methods, which incorporate a generative adversarial network (GAN) and perceptual loss (Johnson et al., 2016) to obtain perceptually pleasing results, such as SRGAN (Ledig et al., 2017),
Figure 2: Different degraded input images and their corresponding outputs produced by CinCGAN (Yuan et al., 2018), BM3D (Dabov et al., 2007), and SRCNN (Dong et al., 2014). CinCGAN (Yuan et al., 2018) is trained on DIV2K-mild dataset in an unpaired manner. If the input image conforms to the training data distribution, CinCGAN will generate better restoration results than BM3D (a). Otherwise, it tends to ignore the unseen degradation types (b)&(c). On the other hand, the traditional method BM3D (Dabov et al., 2007) has stable performance and similar denoising effects on all input images, regardless of the input degradation types. Zoom in for the best view.
ESRGAN (Wang et al., 2018), RankSRGAN (Zhang et al., 2019), and SROBB (Rad et al., 2019). Recently, blind SR has attracted more and more attention (Gu et al., 2019; Bell-Kligler et al., 2019; Luo et al., 2020; Wang et al., 2021), which aims to solve SR with unknown real-world degradation. A comprehensive survey of blind SR has recently been proposed (Liu et al., 2021), which summarizes existing methods. We regard SR as a representative research object and study its deep semantic representations, which can also inspire other low-level vision tasks.
Network interpretability. At present, most existing works on neural network interpretability focus on high-level vision tasks, especially image classification. Zhang et al. (Zhang et al., 2020) systematically reviewed the existing literature on network interpretability and proposed a novel taxonomy to categorize it. Here we only discuss several classic works. By adopting deconvolutional networks (Zeiler et al., 2010), Zeiler et al. (Zeiler & Fergus, 2014) projected the downsampled low-resolution feature activations back to the input pixel space, and then performed a sensitivity analysis to reveal which parts of the image are important for classification. Simonyan et al. (Simonyan et al., 2013) generated a saliency map from the gradients through a single backpropagation pass. Based on class activation maps (CAM) (Zhou et al., 2016), Selvaraju et al. (Selvaraju et al., 2017) proposed Grad-CAM (Gradient-weighted CAM) to produce a coarse-grained attribution map of the important regions in the image, which is broadly applicable to any CNN-based architecture. For more information about the network interpretability literature, please refer to the survey paper (Zhang et al., 2020). However, for low-level vision tasks, similar research is rare. Recently, the local attribution map (LAM) (Gu & Dong, 2021) has been proposed to interpret super-resolution networks, which can be used to localize the input features that influence the network outputs. Besides, Wang et al. (Wang et al., 2020b) presented a pioneering work that bridges the representation relationship between high- and low-level vision. They learned the mapping between deep representations of low- and high-quality images, and leveraged it as a deep degradation prior (DDP) for low-quality image classification. Inspired by these previous works, we interpret SR networks from another new perspective. We dive into their deep feature representations, and discover the “semantics” of SR networks. More background knowledge is described in the supplementary file.
3 MOTIVATION
To begin with, we present an interesting phenomenon, which drives us to start exploring the deep representations of SR networks. It is well known that SR networks are superior to traditional methods in specific scenarios, but are inferior in generalization ability. In blind SR, the degradation types of the input test images are unknown. Traditional methods treat different images equally without distinction of degradation types, thus their performance is generally stable and predictable. How about the SR networks, especially those designed for blind SR?
CinCGAN (Yuan et al., 2018) is a representative solution for real-world SR without paired training data. It maps a degraded LR image to its clean version using data distribution learning before conducting the SR operation. However, we find that it still has a limited application scope even though CinCGAN is developed for blind settings. If the degradation of the input image is not included in the training data, CinCGAN will fail to transfer the degraded input to a clean one. More interestingly, instead of producing extra artifacts in the image, it seems that CinCGAN does not process the input image at all and retains all the original defects. Readers can refer to Fig. 2 for an illustration, where CinCGAN performs well on the testing image of the DIV2K-mild dataset (same distribution as its training data), but produces unsatisfactory results for other degradation types. In other words, the network seems to figure out the specific degradation types within its training data distribution, and a distribution mismatch may make the network “turn off” its ability. This makes the performance of CinCGAN unstable and unpredictable. For comparison, we process the above three types of degraded images with a traditional denoising method, BM3D (Dabov et al., 2007)1. The visual results show that BM3D has an obvious and stable denoising performance for all different degradation types. Although the results of BM3D may be mediocre (the image textures are largely over-smoothed), it does take effect on every input image. This observation reveals a significant discrepancy between traditional methods and SR networks.
The above interesting phenomenon indicates that the deep network has learned more than a regression function, since it demonstrates the ability to distinguish among different degradation types. Inspired by this observation, we try to find any semantics hidden in SR networks.
4 DIVING INTO THE DEEP DEGRADATION REPRESENTATIONS
4.1 DISCRIMINABILITY OF DEEP REPRESENTATIONS IN DEEP SR NETWORKS
Feature projection and visualization. Since the final outputs are always derived from features in CNN layers, we start the exploration with feature maps, especially the deep ones that potentially carry more global and abstract information. To interpret the deep features of a CNN, one common and rational way is to convert the high-dimensional CNN feature maps into lower-dimensional datapoints that can be visualized in a scatterplot. Afterwards, one can intuitively understand the data structures and manifolds. Specifically, we adopt t-distributed stochastic neighbor embedding (t-SNE) (Van der Maaten & Hinton, 2008) for dimensionality reduction. This algorithm is commonly used in manifold learning, and it has been successfully applied in previous works (Donahue et al., 2014; Mnih et al., 2015; Wen et al., 2016; Zahavy et al., 2016; Veličković et al., 2017; Wang et al., 2020b; Huang et al., 2020) for feature projection and visualization. In our experiments, we first reduce the dimensionality of the feature maps to a reasonable amount (50 in this paper) using PCA (Hotelling, 1933), then apply t-SNE to project the 50-dimensional representation to a two-dimensional space, after which the results are visualized in a scatterplot. Furthermore, we also introduce the CHI score (Caliński & Harabasz, 1974) to quantitatively evaluate the distributions of the visualized datapoints. The CHI score is higher when clusters are well separated, which indicates stronger semantic discriminability.
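This projection pipeline is straightforward to reproduce with standard tools; a minimal sketch using scikit-learn (function and variable names are ours, chosen for illustration):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE
    from sklearn.metrics import calinski_harabasz_score

    def project_features(features, labels):
        # features: (N, D) array of flattened deep feature maps;
        # labels: degradation type per image, used only for coloring and CHI.
        x50 = PCA(n_components=50).fit_transform(features)
        x2 = TSNE(n_components=2).fit_transform(x50)
        chi = calinski_harabasz_score(x2, labels)  # higher = better separated
        return x2, chi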
What do the deep features of SR networks represent? As discussed in Sec. 3, since CinCGAN performs differently on various degradations, we compare the features generated from three testing datasets: 1) DIV2K-mild: training and testing data used in CinCGAN, which are synthesized from the DIV2K (Agustsson & Timofte, 2017) dataset, containing noise, blur, pixel shifting and other degradations. 2) DIV2K-noise20: DIV2K with additive Gaussian noise (σ = 20). 3) Hollywood100: 100 images selected from the Hollywood dataset (Laptev et al., 2008), containing real-world old-film degradations. Each test dataset includes 100 images.

1 Note that BM3D is a denoising method while CinCGAN is able to upsample the resolution of the input image. Thus, after applying BM3D, we apply bicubic interpolation to unify the resolution of the output image. This is reasonable as we only evaluate their denoising effects.
As shown in Fig. 3(a), there is a strong feature discriminability for various degradations. Images with aligned content but different degradation types are still separated into different clusters.2 This phenomenon conforms to our observation that CinCGAN does treat various input degradations in different ways. It naturally reveals the “semantics” of the deep representations in CinCGAN, which are closely related to the degradation types rather than the image content. For comparison, we may wonder whether traditional methods exhibit similar behaviors (or “semantics”). However, our feature analysis method can only work for deep models, which contain hierarchical feature maps. It is acknowledged that the simplest network – SRCNN – can be viewed as analogous to a sparse-coding-based method, thus we can use SRCNN to shed light on the behaviors of traditional methods. We train an SRCNN3 with the same data as CinCGAN, and visualize the feature representations of the last layer in Fig. 3(b). It is obvious that different degradations cannot be clearly separated. This phenomenon is completely different from CinCGAN. We can conjecture that the degradation-related semantics only exist in deep models, not in traditional methods or shallow networks. More analysis on shallow networks can be found in the supplementary file.
From CinCGAN to Generic SRGAN. Notably, the training of CinCGAN involves degraded images (DIV2K-mild); it actually performs simultaneous restoration and SR. We also wonder how this kind of degradation-related semantics manifests in classical SR networks (without exposure to any degradation type except downsampling). Therefore, we adopt a generic GAN-based SR network, SRGAN (Ledig et al., 2017; Wang et al., 2018), to conduct the visualization experiment. SRGAN is trained on the DIV2K dataset (Agustsson & Timofte, 2017) with only bicubic-downsampled LR images. Following the common degradation modelling in low-level vision, we use three datasets with different degradation types for testing (synthesized as sketched after this paragraph): 1) DIV2K-clean: the original DIV2K validation set containing only the bicubic downsampling degradation, which conforms to the training data distribution. 2) DIV2K-blur: introduce blurring degradation with a Gaussian blur kernel on the DIV2K-clean set. The kernel width is randomly sampled from [2, 4] for each image and the kernel size is fixed to 15×15. 3) DIV2K-noise: add Gaussian noise to the DIV2K-clean set. The noise level is randomly sampled from [5, 30] for each image. These three testing datasets are aligned in image content but different in degradation types.
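As a concrete sketch of how such test variants can be synthesized from the clean images (the function names are ours, and details such as boundary handling and color space may differ from the exact pipeline used here):

```python
import numpy as np
import cv2

def make_blur(img, kernel_size=15, width_range=(2.0, 4.0)):
    # Gaussian blur: kernel size fixed to 15x15, width sampled from [2, 4].
    sigma = np.random.uniform(*width_range)
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), sigma)

def make_noise(img, level_range=(5.0, 30.0)):
    # Additive Gaussian noise with a level sampled from [5, 30] per image.
    sigma = np.random.uniform(*level_range)
    noisy = img.astype(np.float64) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```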
As shown in Fig. 3(d), a clustering trend similar to CinCGAN is clearly demonstrated. This provides stronger evidence for the existence of degradation-related semantics. Even though the three testing sets share the same content, they are still separated into distinct clusters according to the degradation types. In the supplementary file, similar phenomena are observed with other network structures. Note again that the shallow SRCNN does not have such feature discriminability (see Fig. 3(c)).
Thus far, we have successfully found the semantics hidden in deep SR networks. They are perceivable to humans when visualized in low-dimensional space. Specifically, semantics in deep SR networks relate to degradation types, regardless of the image contents. Simply but vividly, we name this kind of semantics deep degradation representations (DDR).
Is DDR a natural and trivial observation? No, for three reasons. First, DDR has never been discussed before. The function of deep SR networks goes beyond simple regression. The learned deep features can spontaneously characterize the image degradations, indicating that a well-trained deep SR network is naturally a good descriptor of degradation information. Note again that the deep SR networks have not observed any blurry or noisy data during training, but still discriminate different degradations. Second, DDR in SR is not simply caused by different input patterns. We find that different networks learn different semantic representations. For example, in Sec. 4.2, we reveal the differences in the learned representations between classification and SR networks. In Sec. 4.3, we show that not all SR network structures can easily obtain DDR. DDR does not exist in specific cases and shallow networks. Third, DDR has important applications and inspirations. It can not only expand our understanding of the underlying mechanisms of low-level vision models, but also help promote the development of other tasks. In Sec. 5, we apply DDR to several fundamental tasks and achieve appealing results, implying the great potential of DDR.
2Note that the class labels in the scatterplots are only used to assign a color/symbol to the datapoints for better visualization.
3We use the same architecture as the original paper Dong et al. (2014) and add global residual for better visualization.
4.2 DIFFERENCES IN SEMANTICS BETWEEN CLASSIFICATION AND SR NETWORKS
In high-level vision, classification is one of the most representative tasks, where artificially predefined semantic labels on object classes are given as supervision. We choose ResNet18 (He et al., 2016) as the classification backbone and conduct experiments on the CIFAR10 dataset (Krizhevsky et al., 2009). We extract the forward features of each input testing image4 at different network layers, as described in Fig. 3(e)-a.
Fig. 4 shows that as the network deepens, the extracted feature representations produce obvious discriminative clusters, i.e., the learned features are increasingly becoming semantically discriminative. Such discriminative semantics in classification networks are coherent with the artificially predefined labels. This is an intuitive and natural observation, on which lots of representation and discriminative learning methods are based (Wen et al., 2016; Oord et al., 2018; Lee et al., 2019; Wang et al., 2020b).
Further, we add blur and noise degradations to the CIFAR10 test images, and then investigate the feature representations of classification and SR networks. Note that no degradation is added to the training data. As shown in Fig. 5, after adding degradations to the test data, the deep representations obtained by the classification network (ResNet18) are still clustered by object categories, indicating that the features focus more on high-level object class information. On the contrary, the deep representations obtained by SR networks (SRResNet and SRGAN) are clustered with regard to degradation types. The features of the same object category are not clustered together, while those of the same degradation type are, showing a different kind of “semantic” discriminability. This phenomenon intuitively illustrates the differences in the deep semantic representations between SR and classification networks, i.e., degradation-related semantics versus content-related semantics. More interestingly, the “semantics” in SR networks exist naturally, because the SR networks only see clean data, without any input or labelled degradation information.
4.3 HOW DO GLOBAL RESIDUAL AND ADVERSARIAL LEARNING AFFECT THE DEEP REPRESENTATIONS?
Previously, we have elaborated on the deep degradation representations in CinCGAN, SRGAN and SRResNet. Nevertheless, we further discover that not every SR network structure has such a property. Specifically, we find two crucial factors that influence the learned representations: i) image global residual (GR), and ii) generative adversarial learning (GAN).
4For efficiency, we selected 100 testing images of each category (1000 images in total).
Global Residual. We train two SRResNet networks – SRResNet (with global residual) and SRResNet-woGR (without global residual), as shown in Fig. 3. The two architectures are both common in practice (Kim et al., 2016; Shi et al., 2016). DIV2K (Agustsson & Timofte, 2017) dataset is used for training, where the LR images are bicubic-downsampled and clean. Readers can refer to the supplementary file for more details. After testing, the feature visualization analysis is shown in Fig. 6.
The results show that for MSE-based SR methods, GR is essential for producing representations that are discriminative to degradation types. The features in “ResBlock16” of SRResNet show distinct discriminability, where the clean, blur and noise data are clustered separately. On the contrary, SRResNet-woGR shows no discriminability even in deep layers. This phenomenon reveals that GR significantly impacts the learned feature representations. It is inferred that learning the global residual removes most of the content information and makes the network concentrate more on the contained degradation. This claim is also corroborated by visualizing the feature maps in the supplementary file.
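A minimal PyTorch sketch of this structural difference is given below; `body` is a placeholder for the trunk of residual blocks plus pixel-shuffle upsampling, and the bilinear GR branch follows the implementation details in the appendix.

```python
import torch.nn as nn
import torch.nn.functional as F

class SRNet(nn.Module):
    def __init__(self, body, scale=4, use_global_residual=True):
        super().__init__()
        self.body = body  # residual blocks + upsampling, outputs an HR-sized image
        self.scale = scale
        self.use_global_residual = use_global_residual

    def forward(self, lr):
        out = self.body(lr)
        if self.use_global_residual:
            # GR branch: the trunk only predicts a residual on top of a
            # bilinearly upsampled copy of the LR input.
            out = out + F.interpolate(lr, scale_factor=self.scale,
                                      mode='bilinear', align_corners=False)
        return out
```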
Adversarial Learning. MSE-based and GAN-based methods are currently two prevailing trends in CNN-based SR methods. Previous studies only reveal that the output images of MSE-based and GAN-based methods are different, but the differences between their feature representations are rarely discussed. Since their learning mechanisms are quite different, will there be a discrepancy in their deep feature representations? We directly adopt SRResNet and SRResNet-woGR as generators. Consequently, we build two corresponding GAN-based models, namely SRGAN and SRGAN-woGR. After training, we perform the same test and analysis process mentioned earlier.
The results show that for GAN-based methods, the deep features are discriminative to degradation types whether GR is present or not. As shown in Fig. 7(d)(h), the deep representations in “ResBlock16” of SRGAN-woGR have already been clustered according to different degradation types. This suggests that the learned deep representations of the MSE-based and GAN-based methods are dissimilar. Adversarial learning can help the network learn more informative features for distinguishing image degradation rather than image content.
4.4 HOW DOES DDR EVOLVE THROUGH THE TRAINING PROCESS?
We also reveal the relationship between the model performance and DDR discriminability. We select SRResNet models with different training iterations for testing. We report the model performance
on the DIV2K-clean validation dataset and calculate the CHI scores to evaluate the discriminability with clean, blur and noise data. As shown in Fig. 8, as training proceeds, the performance of the model improves, and the feature discriminability for degradation is also enhanced. From random initialization to 700k iterations, the CHI score increases significantly from 0.00 to 591.68, while the PSNR value improves by 2.87dB (due to GR, the initial PSNR value is already relatively high). The training data only include clean LR images, but the trained model has the ability to discriminate unseen degradation types. This clearly implies that a well-trained deep SR network is naturally a good descriptor of degradation information.
4.5 FURTHER DISCUSSION ON THE CAUSES OF DDR PHENOMENON
In the previous sections, we reveal several important factors that promote the manifestation of DDR phenomenon, including global residual, adversarial learning (Sec. 4.3) and training iterations (Sec. 4.4). Based on the above findings and more visualization results, we can analyze the causes of DDR more deeply. We visualize the feature maps of SRResNet-wGR, SRResNet-woGR, SRGAN-wGR and SRGAN-woGR on test images with different degradations in the Appendix.
The DDR phenomenon is mainly introduced by overfitting the degradation in the training data. Specifically, since the training data (DIV2K-clean) do not contain extra degradations, the trained SR network lacks the ability to deal with the unseen degradations. When feeding images with degradations (e.g., noise and blur), it will produce features with unprocessed noises or blurring. These patterned features naturally show a strong discriminability between different degradations. As for GR, models with GR produce features that contain less components of original content information. GR can help remove the redundant image content information and make the network concentrate more on degradation-related information. GAN training also enhances the high-frequency degradation information. Besides, prolonging the training iterations and deepening the network depth will make the network further overfit to the training data.
4.6 WHY CAN SR NETWORKS HARDLY GENERALIZE TO UNSEEN DEGRADATIONS?
Classical SR models (Dong et al., 2014; Lim et al., 2017) assume that the input LR images are generated by a fixed downsampling kernel (e.g., bicubic). However, it is difficult to apply such simple SR models to real scenarios with unknown degradations. We claim that SR and restoration networks learn to overfit the distribution of degradations, rather than the distribution of natural clean images.
To verify our statements, we compare the representations of SRGAN-wGR models trained on clean data and on clean+noise data, respectively. As presented in Fig. 9, if the model is trained only on clean LR data, the deep representations show strong discriminability between clean and noise data. In contrast, if the model sees noise data during training, such discriminability diminishes. The model becomes more robust to more degradation types, as the distributions of the deep representations become homogeneous. In summary, to improve the model generalization to various degradations, we need to diminish the feature discriminability to degradations. Adding more degraded data into training is a plausible way to enhance the generalization.
5 APPLICATIONS AND INSPIRATIONS
Image Distortion Identification Using DDR Features. Image distortion identification (Liang et al., 2020) is an important subsidiary pretreatment for many image processing systems, especially for image quality assessment (IQA). It aims to recognize the distortion type of a distorted image, so as to facilitate downstream tasks (Mittal et al., 2012a; Gu et al., 2019; Liang et al., 2020). Previous methods usually resort to handcrafted features that can distinguish different degradation types (Mittal et al., 2012a;b) or train a classification model via supervised learning (Kang et al., 2014; Bosse et al., 2017; Liang et al., 2020). Since DDR is related to image degradation, it can naturally serve as an excellent prior feature for image distortion identification. To obtain DDR, we do not need any degradation information but only a well-trained SR model (trained on clean data). Following BRISQUE (Mittal et al., 2012a), we adopt the deep representations of SRGAN as input features (using PCA to reduce the original features to a 120-dimensional vector), and then use a linear SVM to classify the degradation types of the LIVE dataset (Sheikh et al., 2006). As shown in Tab. 1, compared with BRISQUE and MLLNet (Liang et al., 2020), DDR features achieve excellent results on recognizing different distortion types. More inspiringly, DDR is not obtained by any distortion-related supervision.
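A minimal sketch of this identification pipeline with scikit-learn follows; the DDR extraction step is abstracted into a pre-computed feature matrix, and the variable names are ours.

```python
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def fit_distortion_classifier(ddr_features, distortion_labels):
    # ddr_features: (N, D) flattened deep representations from a clean-trained
    # SRGAN; distortion_labels: (N,) distortion types (e.g., from LIVE).
    clf = make_pipeline(PCA(n_components=120), LinearSVC())
    clf.fit(ddr_features, distortion_labels)
    return clf  # clf.predict(new_features) recognizes the distortion type
```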
Blind SR with DDR Guidance. To super-resolve real images with unknown degradations, many blind SR methods resort to estimating and utilizing the degradation information. For instance, IKC (Gu et al., 2019) iteratively corrects the estimated blur kernel, and DASR (Wang et al., 2021) implicitly learns the degradation representations by contrastive learning. Based on the findings of DDR, we adopt a trained SRGAN model to extract degradation embeddings to promote blind SR models. RRDBNet (Wang et al., 2018) is adopted as the backbone. The DDR embedding is injected into each RRDB module by StyleMod (Karras et al., 2020) (see Fig. 10). The training data are described in Tab. 2, e.g., “b+n” means that the training data include blur and noise images. DDR guidance helps improve the model performance. Fig. 11 reveals that DDR guidance makes the deep features more homogeneous (CHI scores drop from 14.04 to 4.95).
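Below is a simplified sketch of such guidance. Note that StyleMod in StyleGAN2 actually modulates convolution weights; this feature-wise variant is only an illustrative approximation of how a DDR embedding could condition each RRDB block.

```python
import torch.nn as nn

class StyleMod(nn.Module):
    """Modulate RRDB features with a DDR embedding (simplified)."""
    def __init__(self, embed_dim, num_channels):
        super().__init__()
        self.affine = nn.Linear(embed_dim, num_channels)

    def forward(self, feat, ddr_embed):
        # feat: (B, C, H, W) RRDB features; ddr_embed: (B, embed_dim).
        scale = self.affine(ddr_embed).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return feat * (scale + 1.0)  # "+1" keeps identity as the default behavior
```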
6 CONCLUSIONS
In this paper, we discover the deep degradation representations in deep SR networks, which are different from high-level vision networks. We demonstrate that a well-trained deep SR network is naturally a good descriptor of degradation information. We reveal the differences in deep representations between classification and SR networks. We draw a series of interesting observations on the intrinsic features of deep SR networks, such as the effects of global residual and adversarial learning. Further, we apply DDR to several fundamental tasks and achieve appealing results. The exploration on DDR is of great significance and inspiration for relevant work.
A APPENDIX
A.1 BACKGROUND
Since the emergence of deep convolutional neural networks (CNNs), a large number of computer vision tasks have been drastically promoted, including high-level vision tasks such as image classification Russakovsky et al. (2015); Simonyan & Zisserman (2015); He et al. (2016); Huang et al. (2017); Hu et al. (2018), object localization Ren et al. (2015); He et al. (2017); Redmon et al. (2016) and semantic segmentation Long et al. (2015); Badrinarayanan et al. (2017); Chen et al. (2017); Wang et al. (2020a), as well as low-level vision tasks such as image super-resolution Dong et al. (2014); Ledig et al. (2017); Wang et al. (2018); Zhang et al. (2019); Dai et al. (2019), denoising Zhang et al. (2017; 2018a); Gu et al. (2019); Quan et al. (2020), dehazing Cai et al. (2016); Zhang & Patel (2018); Dong et al. (2020); Deng et al. (2020a), etc. However, an interesting phenomenon is that even though we have successfully applied CNNs to many tasks, we still do not have a thorough understanding of their intrinsic working mechanisms.
To better understand the behaviors of CNNs, many efforts have been put into neural network interpretability for high-level vision Simonyan et al. (2013); Samek et al. (2017); Zeiler & Fergus (2014); Selvaraju et al. (2017); Montavon et al. (2018); Karpathy et al. (2015); Mahendran & Vedaldi (2016); Zhang et al. (2020); Adebayo et al. (2018). Most of them attempt to interpret the CNN decisions by visualization techniques, such as visualizing the intermediate feature maps (or saliency maps and class activation maps) Simonyan et al. (2013); Zeiler & Fergus (2014); Adebayo et al. (2018); Zhou et al. (2016); Selvaraju et al. (2017), computing the class notion images which maximize the class score Simonyan et al. (2013), or projecting feature representations Wen et al. (2016); Wang et al. (2020b); Zhu et al. (2018); Huang et al. (2020). For high-level vision tasks, especially image classification, researchers have established a set of techniques for interpreting deep models and have built up a preliminary understanding of CNN behaviors Gu et al. (2018). One representative work is done by Zeiler & Fergus (2014), who reveal the hierarchical nature of CNN by visualizing and interpreting the feature maps: the shallow layers respond to low-level features such as corners, curves and other edge/color conjunctions; the middle layers capture more complex texture combinations; the deeper layers are learned to encode more abstract and class-specific patterns, e.g., faces and legs. These patterns can be well interpreted by human perception and help partially explain the CNN decisions for high-level vision tasks.
As for low-level vision tasks, however, similar research work is absent. The possible reasons are as follows. In high-level vision tasks, there are usually artificially predefined semantic labels/categories. Thus, we can intuitively associate feature representations with these labels. Nevertheless, in low-level vision tasks, there is no explicit predefined semantics, making it hard to map the representations into a domain that the human can make sense of. Further, high-level vision usually performs classification in a discrete target domain with distinct categories, while low-level vision aims to solve a regression problem with continuous output values. Hence, without the guidance of predefined category semantics, it seems not so straightforward to interpret low-level vision networks.
In this paper, we take super-resolution (SR), one of the most representative tasks in low-level vision, as the research object. Previously, it was generally thought that the features extracted from an SR network have no specific “semantic” information, and that the network simply learns some complex non-linear functions to model the relations between network input and output. Are the CNN features of SR networks really devoid of any semantics? Can we find any kind of “semantics” in SR networks? In this paper, we aim to give an answer to these questions. We reveal that there are semantics existing in SR networks. We first discover and interpret the “semantics” of deep representations in SR networks. But different from high-level vision networks, such semantics relate to the image degradation types and degrees. Accordingly, we designate the deep semantic representations in SR networks as deep degradation representations (DDR).
A.2 LIMITATIONS
In this paper, we only explore the deep representations of SR networks. Other low-level vision networks are also worth exploring. We apply DDR to three tasks without overly elaborate designs in the application parts. For blind SR, we make a simple attempt to improve the model performance; the design is not optimal, and we believe there should be a more efficient and effective way to utilize DDR. For generalization evaluation, DDR can only evaluate the model generalization under constrained conditions. It shows the possibility of designing a generalization evaluation metric, but there is still a long way to go to realize this goal.
A.3 DEEP REPRESENTATIONS OF REAL-WORLD IMAGES
In the main paper, we mainly conduct experiments on synthetic degradations. The difficulty with real-world datasets is that it is hard to keep the content the same while changing the degradations. If we simply use two real-world datasets which contain different contents and different degradations, it is hard to say whether the feature discriminability is targeted at image content or at image degradation. Hence, synthetic data at least allow us to control the variables.
In addition, we find a plausible real-world dataset, Real-City100, which is proposed in the Camera SR paper. The authors use iPhone X and Nikon D5500 devices to capture controllable images. By adjusting the camera focal length, each camera captures paired images with the same content but different resolutions. The low-resolution images contain real-world degradations such as real noise and real blur. We test SRGAN on this dataset and obtain the corresponding visualization results, as shown in Fig. 12. It can be seen that the deep representations of SRGAN can still distinguish different degradations across different devices.
A.4 CLASSIFICATION VS. SUPER-RESOLUTION
A.4.1 FORMULATION
Classification. Classification aims to categorize an input image X into a discrete object class:
Ŷ = G_CL(X), (1)
where G_CL represents the classification network, and Ŷ ∈ R^C is the predicted probability vector indicating which of the C categories X belongs to. In practice, cross-entropy loss is usually adopted to train the classification network:
CE(Y, Ŷ) = −∑_{i=1}^{C} y_i log ŷ_i, (2)
where Y ∈ R^C is a one-hot vector representing the ground-truth class label. ŷ_i is the i-th element of Ŷ, indicating the predicted probability that X belongs to the i-th class.
Super-resolution. A general image degradation process can be modeled as follows: X = (Y ⊗ k) ↓_s + n, (3)
where Y is the high-resolution (HR) image and ⊗ denotes the convolution operation. X is the degraded low-resolution (LR) image. There are three types of degradation in this model: blur kernel k, downsampling operation ↓_s and additive noise n. Hence, super-resolution can be regarded as a superset of other restoration tasks like denoising and deblurring.
Super-resolution (SR) is the inverse problem of Eq. (3). Given the input LR image X ∈ R^{M×N}, the super-resolution network attempts to produce its HR version:
Ŷ = G_SR(X), (4)
where G_SR represents the super-resolution network, Ŷ ∈ R^{sM×sN} is the predicted HR image and s is the upscaling factor. This procedure can be regarded as a typical regression task. At present, there are two groups of methods: MSE-based and GAN-based. The former treats SR as a reconstruction problem and utilizes pixel-wise losses such as the L2 loss to achieve high PSNR values:
L2(Y, Ŷ) = (1 / (s²NM)) ∑_{i=1}^{sN} ∑_{j=1}^{sM} ‖Y_{i,j} − Ŷ_{i,j}‖₂². (5)
This is the most widely used loss function in many image restoration tasks Dong et al. (2014); Lim et al. (2017); Zhang et al. (2018b;a); Cai et al. (2016); He et al. (2020). However, such a loss tends to produce over-smoothed images. To generate photo-realistic SR results, the latter group incorporates adversarial learning and perceptual loss for better visual perception. The optimization is expressed as the following min-max problem:
min_{θ_{G_SR}} max_{θ_{D_SR}} E_{Y∼p_HR}[log D_SR(Y)] + E_{X∼p_LR}[log(1 − D_SR(G_SR(X)))]. (6)
In such adversarial learning, a discriminator D_SR is introduced to distinguish super-resolved images from real HR images. Then, the generator loss is defined as:
L_G = −log D_SR(G_SR(X)). (7)
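The losses in Eqs. (6) and (7) can be sketched in PyTorch as follows, assuming the discriminator outputs raw logits; this is the generic non-saturating GAN formulation rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real_logits, d_fake_logits):
    # Eq. (6), D side: maximize log D(Y) + log(1 - D(G(X))).
    real = F.binary_cross_entropy_with_logits(d_real_logits, torch.ones_like(d_real_logits))
    fake = F.binary_cross_entropy_with_logits(d_fake_logits, torch.zeros_like(d_fake_logits))
    return real + fake

def generator_loss(d_fake_logits):
    # Eq. (7): L_G = -log D(G(X)).
    return F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
```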
From the formulation, we can clearly see that image classification and image super-resolution represent two typical tasks in machine learning: classification and regression. The output of the classification task is discrete, while the output of the regression task is continuous.
A.4.2 ARCHITECTURES
Due to the different output types, the CNN architectures of classification and super-resolution networks also differ. Generally, classification networks contain multiple downsampling layers (e.g., pooling and strided convolution) to gradually reduce the spatial resolution of feature maps. After several convolutional and downsampling layers, there may be one or more fully-connected layers to aggregate global semantic information and generate a vector containing C elements. For the output layer, the SoftMax operator is frequently used to normalize the previously obtained vector into a probabilistic representation. Some renowned classification network structures include AlexNet Krizhevsky et al. (2012), VGG Simonyan & Zisserman (2015), ResNet He et al. (2016), InceptionNet Szegedy et al. (2015); Ioffe & Szegedy (2015); Szegedy et al. (2017), DenseNet Huang et al. (2017), SENet Hu et al. (2018), etc.
Unlike classification networks, super-resolution networks usually do not rely on downsampling layers, but upsampling layers (e.g., bilinear upsampling, transposed convolution Zeiler et al. (2010) or subpixel convolution Shi et al. (2016)). Thus, the spatial resolution of feature maps would increase. Another difference is that the output of the SR network is a three-channel image, rather than an abstract probability vector. The well-known SR network structures include SRCNN Dong et al. (2014), FSRCNN Dong et al. (2016), SRResNet Ledig et al. (2017), RDN Zhang et al. (2018c), RCAN Zhang et al. (2018b), etc. An intuitive comparison of classification and SR networks in CNN architecture is shown in Fig. 18. We can notice that one is gradually downsampling, and the other is gradually upsampling, which displays the discrepancy between high-level vision and low-level vision tasks in structure designing.
Although there are several important architectural differences, classification networks and SR networks can share and adopt some proven effective building modules, like skip connection He et al. (2016); Lim et al. (2017) and attention mechanism Hu et al. (2018); Zhang et al. (2018b).
A.5 IMPLEMENTATION DETAILS
In the main paper, we conduct experiments on ResNet18 He et al. (2016) and SRResNet/SRGAN Ledig et al. (2017). We elaborate more details on the network structures and training settings here.
For ResNet18, we directly adopt the network structure depicted in He et al. (2016). Cross-entropy loss (Eq. 2) is used as the loss function. The learning rate is initialized to 0.1 and decreased with a cosine annealing strategy. We apply the SGD optimizer with weight decay 5×10⁻⁴. The trained model yields an accuracy of 92.86% on the CIFAR10 testing set, which consists of 10,000 images.
For SRResNet-wGR/SRResNet-woGR, we stack 16 residual blocks (RB) as shown in Fig. 3 of the main paper. The residual block is the same as depicted in Wang et al. (2018), in which all the BN layers are removed. Two pixel-shuffle layers Shi et al. (2016) are utilized to conduct upsampling in the network, while the global residual branch is upsampled by bilinear interpolation. L1 loss is adopted as the loss function. The learning rate is initialized to 2×10⁻⁴ and is halved at [100k, 300k, 500k, 600k] iterations. A total of 600,000 iterations are executed.
For SRGAN-wGR/SRGAN-woGR, the generator is the same as SRResNet-wGR/SRResNet-woGR. The discriminator is designed as in Ledig et al. (2017). Adversarial loss (Eq. 7) and perceptual loss Johnson et al. (2016) are combined as the loss functions, which are kept the same as in Ledig et al. (2017). The learning rate of both the generator and discriminator is initialized to 1×10⁻⁴ and is halved at [50k, 100k, 200k, 300k] iterations. A total of 600,000 iterations are executed. For all the super-resolution networks, we apply the Adam optimizer Kingma & Ba (2014) with β₁ = 0.9 and β₂ = 0.99. All the training LR patches are of size 128×128. When testing, 32×32 patches are fed into the networks to obtain deep features. In practice, we find that the patch size has little effect on revealing the deep degradation representations. All the above models are trained on the PyTorch platform with GeForce RTX 2080 Ti GPUs.
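For reference, the optimizer and learning-rate schedule described above can be reconstructed as follows (shown with the SRResNet milestones; the SRGAN variants use a 1×10⁻⁴ initial rate and earlier milestones). This is our sketch from the stated hyperparameters, not the released training code.

```python
import torch

def build_optimizer_and_scheduler(model, base_lr=2e-4,
                                  milestones=(100_000, 300_000, 500_000, 600_000)):
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr, betas=(0.9, 0.99))
    # Halve the learning rate at each milestone (gamma=0.5).
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=list(milestones), gamma=0.5)
    return optimizer, scheduler  # call scheduler.step() once per training iteration
```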
For the experiment of distortion identification, we use the aforementioned trained models to conduct inferencing on the LIVE dataset Sheikh et al. (2006). We crop the central 96 × 96 patch of each image to feed into the SR networks and obtain the corresponding deep representations. Then, the deep representations of each image are reduced to 120-dimensional vector using PCA. Afterwards, the linear SVM is adopted as the classification tail. In practice, we find that the vector dimension can be even larger for better performance. Notably, unlike previous methods, the features here are not trained on any degradation related labels or signals. The SR networks are only trained using clean data. However, the deep representations can be excellent prior features for recognizing various distortion types. This is of great importance and very encouraging.
A.6 DEFINITIONS OF WD, BD AND CHI
In Sec. 3.1 of the main paper, we describe the adopted analysis method for deep feature representations. Many other works have also adopted similar approaches to interpret and visualize deep models, such as Graph Attention Network Veličković et al. (2017), Recurrent Networks Karpathy et al. (2015), Deep Q-Network Zahavy et al. (2016) and Neural Models in NLP Li et al. (2015). Most of the aforementioned works adopt t-SNE as a qualitative analysis technique. To better illustrate and quantitatively measure the semantic discriminability of deep feature representations, we take a step further and introduce several indicators, which are originally used to evaluate clustering performance, according to the data structure after dimensionality reduction by t-SNE. Specifically, we propose to adopt within-cluster dispersion (WD), between-clusters dispersion (BD) and the Calinski-Harabasz Index (CHI) Caliński & Harabasz (1974) to provide some rough yet practicable quantitative measures for reference. For K clusters, WD, BD and CHI are defined as:
WD(K) = ∑_{k=1}^{K} ∑_{i=1}^{n(k)} ‖x_{ik} − x̄_k‖², (8)
where x_{ik} represents the i-th datapoint belonging to class k and x̄_k is the mean of all n(k) datapoints that belong to class k. Datapoints belonging to the same class should be close to each other, and WD measures the compactness within a cluster.
BD(K) = ∑_{k=1}^{K} n(k)‖x̄_k − x̄‖², (9)
where x̄ represents the mean of all datapoints. BD measures the distance between clusters. Intuitively, a larger BD value indicates stronger discriminability between different feature clusters. Given K clusters and N datapoints in total (N = ∑_k n(k)), by combining WD and BD, the CHI is formulated as:
CHI(K) = (BD(K) / WD(K)) · ((N − K) / (K − 1)). (10)
It is represented as the ratio of the between-clusters dispersion mean and the within-cluster dispersion. The CHI score is higher when clusters are dense and well separated, which relates to a standard concept of a cluster.
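Eqs. (8)-(10) translate directly into a few lines of numpy, as sketched below; `points` denotes the 2-D t-SNE projections and `labels` the degradation classes (both names are ours).

```python
import numpy as np

def wd_bd_chi(points, labels):
    points, labels = np.asarray(points), np.asarray(labels)
    classes = np.unique(labels)
    K, N = len(classes), len(points)
    global_mean = points.mean(axis=0)
    wd, bd = 0.0, 0.0
    for k in classes:
        cluster = points[labels == k]
        center = cluster.mean(axis=0)
        wd += np.sum(np.sum((cluster - center) ** 2, axis=1))      # Eq. (8)
        bd += len(cluster) * np.sum((center - global_mean) ** 2)   # Eq. (9)
    chi = (bd / wd) * (N - K) / (K - 1)                            # Eq. (10)
    return wd, bd, chi
```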
Rationality of Using Quantitative Measures with t-SNE. Notably, t-SNE is not a numerical technique but a probabilistic one. It minimizes the Kullback-Leibler (KL) divergence between the distributions that measure pairwise similarities of the input high-dimensional data and of the corresponding low-dimensional points in the embedding. Further, t-SNE involves a non-convex optimization performed by gradient descent, so several optimization parameters need to be chosen, such as perplexity, iterations and learning rate. Hence, the reconstructed solutions may differ due to the choice of optimization parameters and the initial random states. In this paper, we used exactly the same optimization procedure for all experiments. Moreover, we conduct extensive experiments using different parameters and demonstrate that the quality of the optima does not vary much from run to run, which is also emphasized in the t-SNE paper. To make the quantitative analysis more statistically solid, for each projection process, we run t-SNE five times and report the average and standard deviation of every metric.
A.7 FROM SHALLOW TO DEEP SR NETWORKS
In the main paper, we reveal that a shallow 3-layer SRCNN Dong et al. (2014) does not manifest representational discriminability on degradation types. Thus, we hypothesize that only deep SR networks possess such degradation-related semantics. To verify this statement, we gradually deepen SRCNN and observe how its deep representations change. We construct SRCNN models with different depths, from a shallow 3 layers to 13 layers. We train these models on DIV2K-clean data (inputs are only downsampled without other degradations) and test them on classical SR benchmarks. As shown in Tab. 4, the model achieves better SR performance with the increase of network depth, suggesting that deeper networks and more parameters lead to greater learning capacity. On the other hand, the deep representations also gradually manifest discriminability on degradation types, as depicted in Fig. 14. When the model only has 3 layers, its representations cannot distinguish different degradation types. However, when we increase the depth to 13 layers, the deep representations begin to show discriminability on degradation types, with the CHI score increasing to 168.12.
A.8 MORE APPLICATIONS
Evaluating the Generalization Ability. According to the discussions in Sec. 4.6, DDR can be used as an approximate evaluation metric for generalization ability. Specifically, given a trained model and several test datasets with different degradations, we can obtain their DDR features. By
evaluating the discriminability of the projection results (clustering effect), we can roughly measure the generalization performance over different degradation types. The worse the clustering effect, the better the generalizability. Fig. 11 shows the DDR clustering of different models. RRDB (clean) is unable to deal with degraded data and obtains lower PSNR values on blur and noise inputs; its CHI score is 322.16. By introducing degraded data into training, the model gains better generalization and the CHI score drops to 14.04. With DDR guidance, the generalization ability is further enhanced: the CHI score decreases to 4.95. These results are consistent with those in the previous section. Interestingly, we do not need ground-truth images to evaluate the model generalization. A similar attempt has been made in recent work Liu et al. (2022). Note that CHI is only a rough index, which cannot accurately measure minor differences. DDR shows the possibility of designing a generalization evaluation metric, but there is still a long way to go to realize this goal.
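A rough sketch of how such a no-reference generalization measure could be assembled from the earlier pieces; `extract_ddr` is a hypothetical helper returning the flattened deep features of one image, and `project_and_score` follows the earlier projection sketch.

```python
import numpy as np

def generalization_score(model, test_sets):
    # test_sets: e.g. {"clean": [...], "blur": [...], "noise": [...]}
    feats, labels = [], []
    for degradation, images in test_sets.items():
        for img in images:
            feats.append(extract_ddr(model, img))  # hypothetical feature extractor
            labels.append(degradation)
    _, chi = project_and_score(np.stack(feats), labels)
    # Lower CHI -> weaker degradation discriminability -> better generalization.
    return chi
```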
A.9 EXPLORATION ON DIFFERENT DEGRADATION DEGREES
Previously, we introduced deep degradation representations by showing that the deep representations of SR networks are discriminative to different degradation types (e.g., clean, blur and noise). How about the same degradation type but with different degradation degrees? Will the deep representations still be discriminative to them? To explore this question, more experiments and analysis are performed.
We test super-resolution networks on degraded images with different noise and blur degrees. The results are depicted in Tab. 7 and Fig. 17. It can be seen that the deep degradation representations are discriminative not only to cross-degradation (different degradation types) but also to intra-degradation (the same degradation type with different degrees). This suggests that even for the same type of degradation, different degradation degrees cause significant differences in features. The greater the difference between degradation degrees, the stronger the discriminability of the feature representations. This also reflects another difference between the representation semantics of super-resolution and classification networks. For classification, the semantic discriminability of feature representations is generally discrete, because the semantics are associated with discrete object categories. Nevertheless, there appears to be a spectrum (continuous transition) for the discriminability of the deep degradation representations, i.e., the discriminability has a monotonic relationship with the divergence between degradation types and degrees. For example, the degradation difference between noise levels 10 and 20 is less distinct, and the discriminability of the feature representations is relatively smaller, compared with noise levels 10 and 30.
From Tab. 7, we have several notable observations. 1) Compared with blur degradation, noise degradation is easier to discriminate. It is difficult to obtain deep representations with strong discriminability for different blur levels; even for the GAN-based method, global residual (GR) is indispensable to obtain representations that are discriminative to different blur levels. 2) The representations obtained by the GAN-based method have more discriminative semantics on degradation types and degrees than those of the MSE-based method. 3) Again, global residual strengthens the representation discriminability for degradations.
A.10 EXPLORATION OF NETWORK STRUCTURE
In the main paper, we choose ResNet18 He et al. (2016) and SRResNet/SRGAN Ledig et al. (2017) as the backbones of classification and SR networks, respectively. In order to eliminate the influence of different network structures, we design a unified backbone framework, which is composed of the
same basic building modules but connected with different tails for downsampling and upsampling to conduct classification and super-resolution respectively.
The unified architecture is shown in Fig. 18. To differ from the residual block in the main paper, we adopt a residual channel attention layer as the basic building block, which is inspired by SENet Hu et al. (2018) and RCAN Zhang et al. (2018b). For classification, the network tail consists of three max-pooling layers and a fully connected layer; for super-resolution, the network tail consists of two pixel-shuffle layers to upsample the feature maps. According to the conclusions in the main paper, we adopt global residual (GR) in the network design to obtain deep degradation representations (DDR). Except for the network structure, all the training protocols are kept the same as in the main paper. The training details are the same as depicted in Sec. A.5. After training, the unified backbone framework for classification yields an accuracy of 92.08% on the CIFAR10 testing set.
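A minimal sketch of the two task-specific tails is given below; the channel count and the pooling-to-FC bridge (adaptive average pooling plus flatten) are our placeholders, since the exact dimensions are not specified here.

```python
import torch.nn as nn

def make_tail(task, channels=64, num_classes=10):
    if task == "classification":
        # Three max-pooling layers followed by a fully connected layer.
        return nn.Sequential(
            nn.MaxPool2d(2), nn.MaxPool2d(2), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, num_classes))
    else:
        # Two pixel-shuffle stages (x2 each) to upsample the feature maps.
        return nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2),
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2),
            nn.Conv2d(channels, 3, 3, padding=1))

# model = nn.Sequential(shared_trunk, make_tail("super-resolution"))
```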
The experimental results are shown in Fig. 19, Fig. 20 and Tab. 8. From the results, we can see that the observations are consistent with the findings in the main paper. This suggests that the semantic representations do not stem from the network structures, but from the task itself. Hence, our findings are not limited to specific structures but are universal.
A.11 MORE INSPIRATIONS AND FUTURE WORK
Disentanglement of Image Content and Degradation. In plenty of image editing and synthesizing tasks, researchers seek to disentangle an image through different attributes, so that the image can be finely edited Karras et al. (2019); Ma et al. (2018); Deng et al. (2020b); Lee et al. (2018); Nitzan et al. (2020). For example, semantic face editing Shen et al. (2020a;b); Shen & Zhou (2020) aims at manipulating facial attributes of a given image, e.g., pose, gender, age, smile, etc. Most methods attempt to learn disentangled representations and to control the facial attributes by manipulating the latent space. In low-level vision, the deep degradation representations make it possible to decompose an image into content and degradation information, which can promote a number of new areas, such as degradation transferring and degradation editing. Further, more in-depth research on deep degradation representations will also greatly improve our understanding of the nature of images.
A.12 DISCUSSIONS ON DIMENSIONALITY REDUCTION
Among the numerous dimensionality reduction techniques (e.g., PCA Hotelling (1933), CCA Demartines & Hérault (1997), LLE Roweis & Saul (2000), Isomap Tenenbaum et al. (2000), SNE Hinton & Roweis (2002)), t-Distributed Stochastic Neighbor Embedding (t-SNE) Van der Maaten & Hinton (2008) is a widely-used and effective algorithm. It can greatly capture the local structure of the high-dimensional data and simultaneously reveal global structure such as the presence of clusters at several scales. Following Donahue et al. (2014); Mnih et al. (2015); Wen et al. (2016); Zahavy et al. (2016); Veličković et al. (2017); Wang et al. (2020b); Huang et al. (2020), we also take advantage of the superior manifold learning capability of t-SNE for feature projection.
In this section, we further explain the effectiveness of adopting t-SNE and why we choose to project high-dimensional features into two-dimensional datapoints. We first compare the projection results of PCA and t-SNE. From the results shown in Fig. 21, it can be observed that the features projected by t-SNE are successfully clustered together according to the semantic labels, while the features projected by PCA are not well separated. This is because PCA is a linear dimensionality reduction method that cannot deal with the complex non-linear data obtained by the neural networks. Thus, t-SNE is a better choice for conducting dimensionality reduction on CNN features. This suggests the effectiveness of t-SNE for the purpose of feature projection. Note that we do not claim t-SNE is the optimal or the best choice for dimensionality reduction. We just utilize t-SNE as a rational tool to show the trend behind deep representations, since t-SNE has been proven effective and practical in our experiments and other literature.
Then, we discuss the dimensions to reduce to. We conduct dimensionality reduction to different dimensions. Since the highest dimension supported by t-SNE is 3, we first compare the two-dimensional and three-dimensional features projected by t-SNE. The qualitative and quantitative results are shown in Fig. 21 and Tab. 9. When we reduce the features to three dimensions, the reduced representations also show discriminability to semantic labels. However, quantitative results show that two dimensions can better portray the discriminability than three or higher dimensions. For PCA, the results are similar: with higher dimensions, the discriminability decreases. Hence, it is reasonable to reduce high-dimensional features to two-dimensional datapoints. Such settings are also adopted in Donahue et al. (2014); Wang et al. (2020b); Veličković et al. (2017); Huang et al. (2020), which are proven effective.
A.13 VISUALIZATION OF FEATURE MAPS
So far, we have successfully revealed the degradation-related semantics in SR networks with dimensionality reduction. In this section, we directly visualize the deep feature maps extracted from SR networks to provide some intuitive and qualitative interpretations. Specifically, we extract the feature maps obtained from four models (SRResNet-wGR, SRResNet-woGR, SRGAN-wGR and SRGAN-woGR) on images with different degradations (clean, blur4, noise20), respectively. Then we treat each feature map as a one channel image and plot it. The visualized feature maps are shown in Fig. 22. We select 8 feature maps with the largest eigenvalues for display. The complete results are shown in the supplementary file.
Influence of degradations on feature maps. From Fig. 22(a), we can observe that the deep features obtained by SRResNet-woGR portray various characteristics of the input image, including edges, textures and contents. In particular, we highlight in “red rectangles” the features that retain most of the image content. As shown in Fig. 22(b), after applying blur and noise degradations to the input image, the extracted features appear similar degradations as well. For blurred/noisy input images, the extracted feature maps also contain homologous blur/noise degradations.
Effect of global residual. In Sec. 4.3, we have revealed the importance and effectiveness of global residual (GR) for obtaining deep degradation representations in SR networks. But why is GR so important? What is its role? Through visualization, we can provide a qualitative and intuitive explanation here. Comparing Fig. 22(a) and Fig. 22(b), it can be observed that by adopting GR, the extracted features contain fewer components of the original shape and content information. Thus, GR can help remove the redundant image content information and make the network concentrate more on obtaining features that are related to low-level degradation information.
Effect of GAN. Previously, we have discussed the difference between MSE-based and GAN-based SR methods in their deep representations. We find that GAN-based method can better obtain feature representations that are discriminative to different degradation types. As shown in Fig. 22(a) and Fig. 22(c), the feature maps extracted by GAN-based method contain less object shape and content information compared with MSE-based method. This partially explains why the deep representations of GAN-based method are more discriminative, even without global residual. Comparing Fig. 22(c) and Fig. 22(d), when there is global residual, the feature maps containing the image original content information are further reduced, leading to stronger discriminability to degradation types.
A.14 SAMPLES OF DIFFERENT DATASETS
In the main paper, we adopt several different datasets to conduct experiments. Fig. 23 displays some example images from these datasets.
(a) DIV2K-clean: the original DIV2K Agustsson & Timofte (2017) dataset. The high-resolution (HR) ground-truth (GT) images have 2K resolution and are of high visual quality. The low-resolution (LR) input images are downsampled from HR by bicubic interpolation, without any further degradations.
(b) DIV2K-noise: adding Gaussian noises to DIV2K-clean LR input, thus making it contain extra noise degradation. DIV2K-noise20 means the additive Gaussian noise level σ is 20, where the number denotes the noise level.
(c) DIV2K-blur: applying Gaussian blur to DIV2K-clean LR input, thus making it contain extra blur degradation. DIV2K-blur4 means the Gaussian blur width is 4.
(d) DIV2K-mild: officially synthesized from DIV2K Agustsson & Timofte (2017) dataset as challenge dataset Timofte et al. (2017; 2018), which contains noise, blur, pixel shifting and other degradations. The degradation modelling is unknown to challenge participants.
(e) Hollywood100: 100 images selected from Hollywood dataset Laptev et al. (2008), containing real-world old film frames with unknown degradations, which may have compression, noise, blur and other real-world degradations.
Dataset (a), (b), (c) and (d) have the same image contents but different degradations. However, we find that the deep degradation representations (DDR) obtained by SR networks have discriminability to these degradation types, even if the network has not seen these degradations at all during training. Further, for real-world degradation like in (e), the DDR are still able to discern it. | 1. What is the focus of the paper regarding image processing?
2. What are the strengths of the proposed approach, particularly in terms of exploring low-level vision models?
3. What are the weaknesses of the paper, especially regarding its claims and applications in image restoration?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper discovers an interesting phenomenon that the SR network is powerful in discriminating image degradations instead of image contents, especially for well-trained deep networks with global residual and generative adversarial training. To validate the claim, the authors give some empirical evidence by visualizing the distribution of features via t-SNE. In addition, some inspiring analyses and applications are provided and reported.
Strengths And Weaknesses
Strengths:
1. Exploring the intrinsic representations of low-level vision models is of great importance to help the design of new methods. This paper makes a step forward towards this goal.
2. This paper is easy to read, and some of the analyses are inspiring.
Weaknesses:
1. The key issue of this paper is how this empirical finding can facilitate the SR task. In common sense, the restoration of textures, especially regular patterns with strong perceptual priors, relies on semantic information, or at least on similar patterns included in the training set. However, according to the findings in this paper, the model mainly learns degradation-related information. Is this good or bad for the performance of SR? In other words, do the authors recommend that subsequent research keep the global residual and the adversarial loss in the design of SR networks?
2. There lack sufficient experiments to verify the performance of blind SR with the proposed DDR guidance. A detailed comparison against state-of-the-art methods should be conducted.
Clarity, Quality, Novelty And Reproducibility
The clarity, quality and novelty is acceptable. |
ICLR | Title
Discovering Distinctive ``Semantics'' in Super-Resolution Networks
Abstract
Image super-resolution (SR) is a representative low-level vision problem. Although deep SR networks have achieved extraordinary success, we are still unaware of their working mechanisms. Specifically, can SR networks learn semantic information, or do they just perform complex mapping functions? What hinders SR networks from generalizing to real-world data? These questions not only raise our curiosity, but also influence SR network development. In this paper, we make a primary attempt to answer the above fundamental questions. After comprehensively analyzing the feature representations (via dimensionality reduction and visualization), we successfully discover the distinctive “semantics” in SR networks, i.e., deep degradation representations (DDR), which relate to image degradation instead of image content. We show that a well-trained deep SR network is naturally a good descriptor of degradation information. Our experiments also reveal two key factors (adversarial learning and global residual) that influence the extraction of such semantics. We further apply DDR to several interesting applications (such as distortion identification, blind SR and generalization evaluation) and achieve promising results, demonstrating the correctness and effectiveness of our findings.
1 INTRODUCTION
The emergence of deep convolutional neural network (CNN) has given birth to a large number of new solutions to low-level vision tasks (Dong et al., 2014; Zhang et al., 2017). Among these signs of progress, image super-resolution (SR) has enjoyed a great performance leap. Compared with traditional methods (e.g., interpolation (Keys, 1981) and sparse coding (Yang et al., 2008)), SR networks can achieve better performance with improved efficiency.
However, even if we have benefited a lot from the powerful CNNs, we have little knowledge about what happens in SR networks and what exactly distinguishes them from traditional approaches. Does the performance gain merely come from more complex mapping functions? Or is there anything different inside SR networks, like classification networks with discriminative capability? On the other hand, as a classic regression task, SR is expected to perform a continuous mapping from low-resolution (LR) to high-resolution (HR) images. It is generally a local operation without the consideration of the global context. But with the introduction of GAN-based models Ledig et al. (2017); Wang et al. (2018), more delicate SR textures can be generated. It seems that the network has learned some kind of semantics, which is beyond our common perception for regression tasks.
Then, we may raise the question: are there any “semantics” in SR networks? If yes, do these semantics have different definitions from those in classification networks? Existing literature cannot answer these questions, as there is little research on interpreting low-level vision deep models. Nevertheless, discovering the semantics in SR networks is of great importance. It can not only help us further understand the underlying working mechanisms, but also guide us to design better networks and evaluation algorithms.
In this study, we give affirmative answers to the above questions by unfolding the semantics hidden in super-resolution networks. Specifically, different from the artificially predefined semantics associated with object classes in high-level vision, semantics in SR networks are distinct in terms of image degradation instead of image content. Accordingly, we name such semantics deep degradation representations (DDR). More interestingly, such degradation-related semantics are spontaneously existing without any predefined labels. We reveal that a well-trained deep SR network is naturally a good descriptor of degradation information.
Notably, the semantics in this paper have different implications from those in high-level vision. Previously, researchers have disclosed the hierarchical nature of classification networks (Zeiler & Fergus, 2014; Gu et al., 2018). As the layer deepens, the learned features respond more to abstract high-level patterns (e.g., faces and legs), showing a stronger discriminability to object categories (see Fig. 4). However, similar research in low-level vision is absent, since there are no predefined semantic labels. In this paper, we reveal the differences in deep “semantics” between classification and SR networks, as illustrated in Fig. 1.
Our observation stems from a representative blind SR method – CinCGAN Yuan et al. (2018), and we further extend it to more common SR networks – SRResNet and SRGAN Ledig et al. (2017). We have also revealed more interesting phenomena to help interpret the semantics, including the analogy to classification networks and the influential factors for extracting DDR. Moreover, we improve the results of several tasks by exploiting DDR. We believe our findings could lay the groundwork for the interpretability of SR networks, and inspire more exploration of the mechanism of low-level vision deep models.
Contributions. 1) We have successfully discovered the “semantics” in SR networks, denoted as deep degradation representations (DDR). Through in-depth analysis, we also find that global residual learning and adversarial learning can facilitate the SR network to extract such degradation-related representations. 2) We reveal the differences in deep representations between classification and SR networks, for the first time. This further expands our knowledge of the deep representations of highand low-level vision models. 3) We exploit our findings to several fundamental tasks and achieve very appealing results, including distortion identification, blind SR and generalization evaluation.
2 RELATED WORK
Super-resolution. Super-resolution (SR) is a fundamental task in low-level vision, which aims to reconstruct the high-resolution (HR) image from the corresponding low-resolution (LR) counterpart. SRCNN (Dong et al., 2014) is the first CNN-based method for SR. Since then, a large number of deep-learning-based methods have been developed (Dong et al., 2016; Lim et al., 2017; Zhang et al., 2018b; Ledig et al., 2017; Zhang et al., 2019). Generally, current CNN-based SR methods can be categorized into two groups. One is the MSE-based method, which targets minimizing the distortion (e.g., Mean Square Error) between the ground-truth HR image and the super-resolved image to yield high PSNR values, such as SRCNN (Dong et al., 2014), VDSR (Kim et al., 2016), EDSR (Lim et al., 2017), RCAN (Zhang et al., 2018b), SAN (Dai et al., 2019), etc. The other is the GAN-based method, which incorporates generative adversarial network (GAN) and perceptual loss (Johnson et al., 2016) to obtain perceptually pleasing results, such as SRGAN (Ledig et al., 2017),
Figure 2: Different degraded input images and their corresponding outputs produced by CinCGAN (Yuan et al., 2018), BM3D (Dabov et al., 2007), and SRCNN (Dong et al., 2014). CinCGAN (Yuan et al., 2018) is trained on DIV2K-mild dataset in an unpaired manner. If the input image conforms to the training data distribution, CinCGAN will generate better restoration results than BM3D (a). Otherwise, it tends to ignore the unseen degradation types (b)&(c). On the other hand, the traditional method BM3D (Dabov et al., 2007) has stable performance and similar denoising effects on all input images, regardless of the input degradation types. Zoom in for the best view.
ESRGAN (Wang et al., 2018), RankSRGAN (Zhang et al., 2019) and SROBB (Rad et al., 2019). Recently, blind SR has attracted more and more attention (Gu et al., 2019; Bell-Kligler et al., 2019; Luo et al., 2020; Wang et al., 2021), which aims to solve SR with unknown real-world degradation. A comprehensive survey of blind SR has recently been proposed (Liu et al., 2021), which summarizes existing methods. We regard SR as a representative research object and study its deep semantic representations; this study can also inspire other low-level vision tasks.
Network interpretability. At present, most existing works on neural network interpretability focus on high-level vision tasks, especially image classification. Zhang et al. (2020) systematically reviewed existing literature on network interpretability and proposed a novel taxonomy to categorize it. Here we only discuss several classic works. By adopting deconvolutional networks (Zeiler et al., 2010), Zeiler et al. (Zeiler & Fergus, 2014) projected the downsampled low-resolution feature activations back to the input pixel space, and then performed a sensitivity analysis to reveal which parts of the image are important for classification. Simonyan et al. (2013) generated a saliency map from the gradients through a single backpropagation pass. Based on class activation maps (CAM) (Zhou et al., 2016), Selvaraju et al. (2017) proposed Grad-CAM (Gradient-weighted CAM) to produce a coarse-grained attribution map of the important regions in the image, which is broadly applicable to any CNN-based architecture. For more information about the network interpretability literature, please refer to the survey paper (Zhang et al., 2020). However, for low-level vision tasks, similar research is rare. Recently, the local attribution map (LAM) (Gu & Dong, 2021) has been proposed to interpret super-resolution networks, which can be used to localize the input features that influence the network outputs. Besides, Wang et al. (2020b) presented a pioneering work that bridges the representation relationship between high- and low-level vision. They learned the mapping between deep representations of low- and high-quality images, and leveraged it as a deep degradation prior (DDP) for low-quality image classification. Inspired by these previous works, we interpret SR networks from another new perspective. We dive into their deep feature representations, and discover the “semantics” of SR networks. More background knowledge is described in the supplementary file.
3 MOTIVATION
To begin with, we present an interesting phenomenon, which drives us to start exploring the deep representations of SR networks. It is well known that SR networks are superior to traditional methods in specific scenarios, but inferior in generalization ability. In blind SR, the degradation types of the input test images are unknown. Traditional methods treat different images equally without distinguishing degradation types, so their performance is generally stable and predictable. How about SR networks, especially those designed for blind SR?
CinCGAN (Yuan et al., 2018) is a representative solution for real-world SR without paired training data. It maps a degraded LR image to its clean version using data distribution learning before conducting the SR operation. However, we find that it still has a limited application scope even though CinCGAN is developed for blind settings. If the degradation of the input image is not included in the training data, CinCGAN fails to transfer the degraded input to a clean one. More interestingly, instead of producing extra artifacts in the image, CinCGAN seems not to process the input image at all and retains all the original defects. Readers can refer to Fig. 2 for an illustration, where CinCGAN performs well on the testing image of the DIV2K-mild dataset (same distribution as its training data), but produces unsatisfactory results for other degradation types. In other words, the network seems to figure out the specific degradation types within its training data distribution, and a distribution mismatch may make the network “turn off” its ability. This makes the performance of CinCGAN unstable and unpredictable. For comparison, we process the above three types of degraded images with a traditional denoising method, BM3D (Dabov et al., 2007)¹. The visual results show that BM3D has an obvious and stable denoising performance for all degradation types. Although the results of BM3D may be mediocre (the image textures are largely over-smoothed), it does take effect on every input image. This observation reveals a significant discrepancy between traditional methods and SR networks.
The above interesting phenomenon indicates that the deep network has learned more than a regression function, since it demonstrates the ability to distinguish among different degradation types. Inspired by this observation, we try to find the semantics hidden in SR networks.
4 DIVING INTO THE DEEP DEGRADATION REPRESENTATIONS
4.1 DISCRIMINABILITY OF DEEP REPRESENTATIONS IN DEEP SR NETWORKS
Feature projection and visualization. Since the final outputs are always derived from features in CNN layers, we start the exploration with feature maps, especially the deep ones that potentially carry more global and abstract information. To interpret the deep features of a CNN, one common and rational way is to convert the high-dimensional CNN feature maps into lower-dimensional datapoints that can be visualized in a scatterplot. Afterwards, one can intuitively understand the data structures and manifolds. Specifically, we adopt t-distributed stochastic neighbor embedding (t-SNE) (Van der Maaten & Hinton, 2008) for dimensionality reduction. This algorithm is commonly used in manifold learning, and it has been successfully applied in previous works (Donahue et al., 2014; Mnih et al., 2015; Wen et al., 2016; Zahavy et al., 2016; Veličković et al., 2017; Wang et al., 2020b; Huang et al., 2020) for feature projection and visualization. In our experiments, we first reduce the dimensionality of the feature maps to a reasonable amount (50 in this paper) using PCA (Hotelling, 1933), then apply t-SNE to project the 50-dimensional representation into two-dimensional space, after which the results are visualized in a scatterplot. Furthermore, we introduce the CHI score (Caliński & Harabasz, 1974) to quantitatively evaluate the distributions of the visualized datapoints. The CHI score is higher when clusters are well separated, which indicates stronger semantic discriminability.
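For concreteness, the sketch below outlines this projection pipeline with scikit-learn. The arrays `features` (flattened deep feature maps, one row per image) and `labels` (integer-coded degradation types, used only for coloring and scoring) are hypothetical placeholders, not part of any released code.

```python
# Minimal sketch of the PCA -> t-SNE -> CHI pipeline described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.metrics import calinski_harabasz_score
import matplotlib.pyplot as plt

def project_and_score(features, labels, seed=0):
    # Step 1: PCA to a moderate dimensionality (50 in the paper).
    reduced = PCA(n_components=50).fit_transform(features)
    # Step 2: t-SNE projection to two-dimensional space.
    embedded = TSNE(n_components=2, random_state=seed).fit_transform(reduced)
    # Step 3: CHI score on the 2D datapoints; higher = better-separated clusters.
    chi = calinski_harabasz_score(embedded, labels)
    plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, s=5)
    plt.title(f"CHI = {chi:.2f}")
    return embedded, chi
```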
What do the deep features of SR networks represent? As discussed in Sec. 3, since CinCGAN performs differently on various degradations, we compare the features generated from three testing datasets: 1) DIV2K-mild: training and testing data used in CinCGAN, which are synthesized from the DIV2K (Agustsson & Timofte, 2017) dataset, containing noise, blur, pixel shifting and other degradations. 2) DIV2K-noise20: the DIV2K set with additive Gaussian noise (σ = 20). 3) Hollywood100: 100 images selected from the Hollywood dataset (Laptev et al., 2008), containing real-world old-film degradations. Each test dataset includes 100 images.
¹Note that BM3D is a denoising method while CinCGAN is able to upsample the resolution of the input image. Thus, after applying BM3D, we apply bicubic interpolation to unify the resolution of the output image. This is reasonable as we only evaluate their denoising effects.
As shown in Fig. 3(a), the features exhibit strong discriminability for the various degradations. Images with aligned contents but different degradation types are still separated into different clusters.² This phenomenon conforms to our observation that CinCGAN does treat various input degradations in different ways. It naturally reveals the “semantics” of deep representations in CinCGAN, which are closely related to the degradation types rather than the image content. For comparison, we may wonder whether traditional methods have similar behaviors (or “semantics”). However, our feature analysis method can only work for deep models, which contain hierarchical feature maps. It is acknowledged that the simplest network – SRCNN – can be regarded as analogous to a sparse-coding-based method, thus we can use SRCNN to shed light on the behaviors of traditional methods. We train an SRCNN³ with the same data as CinCGAN, and visualize the feature representations of the last layer in Fig. 3(b). It is obvious that different degradations cannot be clearly separated. This phenomenon is completely different from CinCGAN. We conjecture that the degradation-related semantics only exist in deep models, not in traditional methods or shallow networks. More analysis on shallow networks can be found in the supplementary file.
From CinCGAN to Generic SRGAN. Notably, the training of CinCGAN involves degraded images (DIV2K-mild); it actually performs simultaneous restoration and SR. We also wonder how this kind of degradation-related semantics manifests in classical SR networks (without exposure to any degradation type except for downsampling). Therefore, we adopt a generic GAN-based SR network, SRGAN (Ledig et al., 2017; Wang et al., 2018), to conduct the visualization experiment. SRGAN is trained on the DIV2K dataset (Agustsson & Timofte, 2017) with only bicubic-downsampled LR images. According to the common degradation modelling in low-level vision, we use three datasets with different degradation types for testing: 1) DIV2K-clean: the original DIV2K validation set containing only the bicubic downsampling degradation, which conforms to the training data distribution. 2) DIV2K-blur: introduce blurring degradation with a Gaussian blur kernel on the DIV2K-clean set. The kernel width is randomly sampled from [2, 4] for each image and the kernel size is fixed to 15×15. 3) DIV2K-noise: add Gaussian noise to the DIV2K-clean set. The noise level is randomly sampled from [5, 30] for each image. These three testing datasets are aligned in image content but differ in degradation types. A sketch of how such degraded sets can be synthesized is shown below.
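The following sketch illustrates one possible way to synthesize the latter two test sets from clean images, following the stated parameters (blur width in [2, 4] with a 15×15 kernel; noise level in [5, 30]). The function names and the use of OpenCV are our own assumptions; the paper's exact pipeline may differ.

```python
# Illustrative synthesis of DIV2K-blur / DIV2K-noise style degradations.
# `img` is assumed to be a float32 RGB array with values in [0, 255].
import numpy as np
import cv2

rng = np.random.default_rng(0)

def make_blur(img):
    # Gaussian blur: width sampled from [2, 4], kernel size fixed to 15x15.
    sigma = rng.uniform(2.0, 4.0)
    return cv2.GaussianBlur(img, ksize=(15, 15), sigmaX=sigma)

def make_noise(img):
    # Additive Gaussian noise with a level sampled from [5, 30].
    sigma = rng.uniform(5.0, 30.0)
    noise = rng.normal(0.0, sigma, size=img.shape).astype(np.float32)
    return np.clip(img + noise, 0.0, 255.0)
```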
As shown in Fig. 3(d), a clustering trend similar to CinCGAN is clearly demonstrated. This provides stronger evidence for the existence of degradation-related semantics. Even though the three testing sets share the same content, they are still separated into distinct clusters according to the degradation types. In the supplementary file, similar phenomena are observed with other network structures. Note again that the shallow SRCNN does not have such feature discriminability (see Fig. 3(c)).
Thus, we successfully find the semantics hidden in deep SR networks. They are perceivable to humans when visualized in low-dimensional space. Specifically, the semantics in deep SR networks relate to degradation types regardless of the image contents. Simply but vividly, we name this kind of semantics deep degradation representations (DDR).
Is DDR a natural and trivial observation? No, for three reasons. First, DDR has never been discussed before. The function of deep SR networks goes beyond simple regression; the learned deep features can spontaneously characterize the image degradations, indicating that a well-trained deep SR network is naturally a good descriptor of degradation information. Note again that the deep SR networks have not observed any blurry or noisy data during training, but still discriminate different degradations. Second, DDR in SR is not simply caused by different input patterns. We find that different networks learn different semantic representations. For example, in Sec. 4.2 we reveal the differences in the learned representations between classification and SR networks, and in Sec. 4.3 we show that not all SR network structures can easily obtain DDR; DDR does not exist in certain cases or in shallow networks. Third, DDR has important applications and inspirations. It can not only expand our understanding of the underlying mechanisms of low-level vision models, but also help promote the development of other tasks. In Sec. 5, we apply DDR to several fundamental tasks and achieve appealing results, implying the great potential of DDR.
²Note that the class labels in the scatterplots are only used to assign a color/symbol to the datapoints for better visualization.
³We use the same architecture as the original paper (Dong et al., 2014) and add a global residual for better visualization.
4.2 DIFFERENCES IN SEMANTICS BETWEEN CLASSIFICATION AND SR NETWORKS
In high-level vision, classification is one of the most representative tasks, where artificially predefined semantic labels on object classes are given as supervision. We choose ResNet18 (He et al., 2016) as the classification backbone and conduct experiments on the CIFAR10 dataset (Krizhevsky et al., 2009). We extract the forward features of each input testing image⁴ at different network layers, as described in Fig. 3(e)-a.
Fig. 4 shows that as the network deepens, the extracted feature representations produce obvious discriminative clusters, i.e., the learned features are increasingly becoming semantically discriminative. Such discriminative semantics in classification networks are coherent with the artificially predefined labels. This is an intuitive and natural observation, on which lots of representation and discriminative learning methods are based (Wen et al., 2016; Oord et al., 2018; Lee et al., 2019; Wang et al., 2020b).
Further, we add blur and noise degradations to the CIFAR10 test images, and then investigate the feature representations of classification and SR networks. Note that no degradation is added to the training data. As shown in Fig. 5, after adding degradations to the test data, the deep representations obtained by the classification network (ResNet18) are still clustered by object categories, indicating that the features focus more on high-level object class information. On the contrary, the deep representations obtained by SR networks (SRResNet and SRGAN) are clustered with regard to degradation types. The features of the same object category are not clustered together, while those of the same degradation type are, showing a different kind of “semantic” discriminability. This phenomenon intuitively illustrates the differences in the deep semantic representations between SR and classification networks, i.e., degradation-related semantics versus content-related semantics. More interestingly, the “semantics” in SR networks exist naturally, because the SR networks only see clean data without any input or labelled degradation information.
4.3 HOW DO GLOBAL RESIDUAL AND ADVERSARIAL LEARNING AFFECT THE DEEP REPRESENTATIONS?
Previously, we have elaborated on the deep degradation representations in CinCGAN, SRGAN and SRResNet. Nevertheless, we further discover that not every SR network structure has such a property. To be specific, we find two crucial factors that can influence the learned representations: i) image global residual (GR), and ii) generative adversarial learning (GAN).
⁴For efficiency, we selected 100 testing images of each category (1,000 images in total).
Global Residual. We train two SRResNet networks – SRResNet (with global residual) and SRResNet-woGR (without global residual), as shown in Fig. 3. The two architectures are both common in practice (Kim et al., 2016; Shi et al., 2016). DIV2K (Agustsson & Timofte, 2017) dataset is used for training, where the LR images are bicubic-downsampled and clean. Readers can refer to the supplementary file for more details. After testing, the feature visualization analysis is shown in Fig. 6.
The results show that for the MSE-based SR method, GR is essential for producing representations that are discriminative to degradation types. The features in “ResBlock16” of SRResNet show distinct discriminability, where the clean, blur, and noise data are clustered separately. On the contrary, SRResNet-woGR shows no discriminability even in deep layers. This phenomenon reveals that GR significantly impacts the learned feature representations. It is inferred that learning the global residual removes most of the content information and makes the network concentrate more on the contained degradation. This claim is also corroborated by visualizing the feature maps in the supplementary file.
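To make the two variants concrete, here is a minimal PyTorch sketch under our own naming assumptions (`ResBlock`, `SRResNet`). It follows the appendix description (16 BN-free residual blocks, two pixel-shuffle layers for ×4 upscaling, and a bilinearly upsampled global residual branch), but it is an illustrative reconstruction rather than the authors' released code.

```python
# SRResNet-wGR vs. SRResNet-woGR: the only difference is the global residual.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Residual block without BN, as in the paper's implementation details."""
    def __init__(self, ch=64):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

class SRResNet(nn.Module):
    def __init__(self, global_residual=True, n_blocks=16, ch=64):
        super().__init__()
        self.global_residual = global_residual  # True -> wGR, False -> woGR
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.up = nn.Sequential(  # two pixel-shuffle layers -> x4 upscaling
            nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2),
            nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2),
        )
        self.tail = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        out = self.tail(self.up(self.body(self.head(x))))
        if self.global_residual:
            # The global residual branch is upsampled by bilinear interpolation.
            out = out + F.interpolate(x, scale_factor=4, mode='bilinear',
                                      align_corners=False)
        return out
```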
Adversarial Learning. MSE-based and GAN-based methods are currently two prevailing trends in CNN-based SR methods. Previous studies only reveal that the output images of MSE-based and GAN-based methods are different, but the differences between their feature representations are rarely discussed. Since their learning mechanisms are quite different, will there be a discrepancy in their deep feature representations? We directly adopt SRResNet and SRResNet-woGR as generators. Consequently, we build two corresponding GAN-based models, namely SRGAN and SRGAN-woGR. After training, we perform the same test and analysis process mentioned earlier.
The results show that the deep features are bound to be discriminative to degradation types for the GAN-based method, whether GR is present or not. As shown in Fig. 7(d)(h), the deep representations in “ResBlock16” of SRGAN-woGR are already clustered according to different degradation types. This suggests that the learned deep representations of the MSE-based and GAN-based methods are dissimilar. Adversarial learning can help the network learn more informative features for distinguishing image degradation rather than image content.
4.4 HOW DOES DDR EVOLVE THROUGH THE TRAINING PROCESS?
We also reveal the relationship between the model performance and DDR discriminability. We select SRResNet models at different training iterations for testing. We report the model performance on the DIV2K-clean validation dataset and calculate the CHI scores to evaluate the discriminability among clean, blur and noise data. As shown in Fig. 8, as training proceeds, the performance of the model improves, while the feature discriminability for degradation is also enhanced. From random initialization to 700k iterations, the CHI score increases significantly from 0.00 to 591.68, while the PSNR value improves by 2.87dB (due to GR, the initial PSNR value is relatively high). The training data only include clean LR images, but the trained model has the ability to discriminate unseen degradation types. This clearly implies that a well-trained deep SR network is naturally a good descriptor of degradation information.
4.5 FURTHER DISCUSSION ON THE CAUSES OF DDR PHENOMENON
In the previous sections, we revealed several important factors that promote the manifestation of the DDR phenomenon, including global residual, adversarial learning (Sec. 4.3) and training iterations (Sec. 4.4). Based on the above findings and more visualization results, we can analyze the causes of DDR more deeply. We visualize the feature maps of SRResNet-wGR, SRResNet-woGR, SRGAN-wGR and SRGAN-woGR on test images with different degradations in the Appendix.
The DDR phenomenon is mainly caused by overfitting to the degradations in the training data. Specifically, since the training data (DIV2K-clean) do not contain extra degradations, the trained SR network lacks the ability to deal with unseen degradations. When fed images with degradations (e.g., noise and blur), it produces features with unprocessed noise or blurring. These patterned features naturally show a strong discriminability between different degradations. As for GR, models with GR produce features that contain fewer components of the original content information. GR can help remove the redundant image content information and make the network concentrate more on degradation-related information. GAN training also enhances the high-frequency degradation information. Besides, prolonging the training iterations and deepening the network will make the network further overfit to the training data.
4.6 WHY CAN SR NETWORKS HARDLY GENERALIZE TO UNSEEN DEGRADATIONS?
Classical SR models (Dong et al., 2014; Lim et al., 2017) assume that the input LR images are generated by a fixed downsampling kernel (e.g., bicubic). However, it is difficult to apply such simple SR models to real scenarios with unknown degradations. We claim that SR and restoration networks learn to overfit the distribution of degradations, rather than the distribution of natural clean images.
To verify our statement, we compare the representations of SRGAN-wGR models trained on clean data and on clean+noise data, respectively. As presented in Fig. 9, if the model is trained only on clean LR data, the deep representations show strong discriminability between clean and noise data. In contrast, if the model sees noise data during training, such discriminability diminishes. The model becomes more robust to more degradation types, as the distributions of the deep representations become homogeneous. In summary, to improve model generalization over various degradations, we need to diminish the feature discriminability to degradations. Adding more degraded data into training is a plausible way to enhance generalization.
5 APPLICATIONS AND INSPIRATIONS
Image Distortion Identification Using DDR Features. Image distortion identification (Liang et al., 2020) is an important subsidiary pretreatment for many image processing systems, especially for image quality assessment (IQA). It aims to recognize the distortion type of distorted images, so as to facilitate downstream tasks (Mittal et al., 2012a; Gu et al., 2019; Liang et al., 2020). Previous methods usually resort to designing handcrafted features that can distinguish different degradation types (Mittal et al., 2012a;b) or to training a classification model via supervised learning (Kang et al., 2014; Bosse et al., 2017; Liang et al., 2020). Since DDR is related to image degradation, it can naturally be used as an excellent prior feature for image distortion identification. To obtain DDR, we do not need any degradation information, only a well-trained SR model (trained on clean data). Following BRISQUE (Mittal et al., 2012a), we adopt the deep representations of SRGAN as input features (using PCA to reduce the original features to a 120-dimensional vector), and then use a linear SVM to classify the degradation types of the LIVE dataset (Sheikh et al., 2006). As shown in Tab. 1, compared with BRISQUE and MLLNet (Liang et al., 2020), DDR features achieve excellent results on recognizing different distortion types. More inspiringly, DDR is not obtained by any distortion-related supervision.
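The described classifier reduces to a few lines with scikit-learn. In the sketch below, `ddr_features` and `distortion_labels` are hypothetical arrays obtained by running the trained SR model on the LIVE images, and the 5-fold evaluation is our own simplification of the protocol.

```python
# DDR-based distortion identification: PCA to 120 dims + linear SVM.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def identify_distortions(ddr_features, distortion_labels):
    # Pipeline mirrors the paper's setup: dimensionality reduction,
    # then a linear SVM as the classification tail.
    clf = make_pipeline(PCA(n_components=120), LinearSVC())
    scores = cross_val_score(clf, ddr_features, distortion_labels, cv=5)
    return scores.mean()  # mean classification accuracy
```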
Blind SR with DDR Guidance. To super-resolve real images with unknown degradations, many blind SR methods resort to estimating and utilizing the degradation information. For instance, IKC (Gu et al., 2019) iteratively corrects the estimated blur kernel, and DASR (Wang et al., 2021) implicitly learns degradation representations by contrastive learning. Based on the findings of DDR, we adopt a trained SRGAN model to extract degradation embeddings to promote blind SR models. RRDBNet (Wang et al., 2018) is adopted as the backbone. The DDR embedding is injected into each RRDB module by StyleMod (Karras et al., 2020) (see Fig. 10). The training data are described in Tab. 2, e.g., “b+n” means that the training data include blur and noise images. DDR guidance helps improve the model performance. Fig. 11 reveals that DDR guidance makes the deep features more homogeneous (CHI scores drop from 14.04 to 4.95).
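For intuition, a hedged sketch of injecting a DDR embedding into an RRDB-style backbone is given below. StyleMod in the cited work modulates convolution weights directly; here we substitute a simpler channel-wise affine modulation predicted from the embedding, so this is an illustration in the spirit of the design, not the paper's exact module.

```python
# Channel-wise modulation of backbone features by a DDR embedding.
import torch
import torch.nn as nn

class DDRModulation(nn.Module):
    def __init__(self, embed_dim, n_channels):
        super().__init__()
        self.to_scale = nn.Linear(embed_dim, n_channels)
        self.to_shift = nn.Linear(embed_dim, n_channels)

    def forward(self, feat, ddr_embed):
        # feat: (B, C, H, W) RRDB features; ddr_embed: (B, embed_dim)
        # degradation embedding extracted by a frozen, well-trained SRGAN.
        scale = self.to_scale(ddr_embed).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(ddr_embed).unsqueeze(-1).unsqueeze(-1)
        return feat * (1 + scale) + shift
```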
6 CONCLUSIONS
In this paper, we discover the deep degradation representations in deep SR networks, which are different from those in high-level vision networks. We demonstrate that a well-trained deep SR network is naturally a good descriptor of degradation information. We reveal the differences in deep representations between classification and SR networks. We draw a series of interesting observations on the intrinsic features of deep SR networks, such as the effects of global residual and adversarial learning. Further, we apply DDR to several fundamental tasks and achieve appealing results. The exploration of DDR is of great significance and inspiration for relevant work.
A APPENDIX
A.1 BACKGROUND
Since the emergence of deep convolutional neural networks (CNNs), a large number of computer vision tasks have been drastically promoted, including high-level vision tasks such as image classification Russakovsky et al. (2015); Simonyan & Zisserman (2015); He et al. (2016); Huang et al. (2017); Hu et al. (2018), object localization Ren et al. (2015); He et al. (2017); Redmon et al. (2016) and semantic segmentation Long et al. (2015); Badrinarayanan et al. (2017); Chen et al. (2017); Wang et al. (2020a), as well as low-level vision tasks such as image super-resolution Dong et al. (2014); Ledig et al. (2017); Wang et al. (2018); Zhang et al. (2019); Dai et al. (2019), denoising Zhang et al. (2017; 2018a); Gu et al. (2019); Quan et al. (2020), dehazing Cai et al. (2016); Zhang & Patel (2018); Dong et al. (2020); Deng et al. (2020a), etc. However, an interesting phenomenon is that even though we have successfully applied CNNs to many tasks, we still do not have a thorough understanding of their intrinsic working mechanisms.
To better understand the behaviors of CNN, many efforts have been put in the neural network interpretability for high-level vision Simonyan et al. (2013); Samek et al. (2017); Zeiler & Fergus (2014); Selvaraju et al. (2017); Montavon et al. (2018); Karpathy et al. (2015); Mahendran & Vedaldi (2016); Zhang et al. (2020); Adebayo et al. (2018). Most of them attempt to interpret the CNN decisions by visualization techniques, such as visualizing the intermediate feature maps (or saliency maps and class activation maps) Simonyan et al. (2013); Zeiler & Fergus (2014); Adebayo et al. (2018); Zhou et al. (2016); Selvaraju et al. (2017), computing the class notion images which maximize the class score Simonyan et al. (2013), or projecting feature representations Wen et al. (2016); Wang et al. (2020b); Zhu et al. (2018); Huang et al. (2020). For high-level vision tasks, especially image classification, researchers have established a set of techniques for interpreting deep models and have built up a preliminary understanding of CNN behaviors Gu et al. (2018). One representative work is done by Zeiler et al. Zeiler & Fergus (2014), who reveal the hierarchical nature of CNN by visualizing and interpreting the feature maps: the shallow layers respond to low-level features such as corners, curves and other edge/color conjunctions; the middle layers capture more complex texture combinations; the deeper layers are learned to encode more abstract and class-specific patterns, e.g., faces and legs. These patterns can be well interpreted by human perception and help partially explain the CNN decisions for high-level vision tasks.
As for low-level vision tasks, however, similar research work is absent. The possible reasons are as follows. In high-level vision tasks, there are usually artificially predefined semantic labels/categories. Thus, we can intuitively associate feature representations with these labels. Nevertheless, in low-level vision tasks, there is no explicit predefined semantics, making it hard to map the representations into a domain that the human can make sense of. Further, high-level vision usually performs classification in a discrete target domain with distinct categories, while low-level vision aims to solve a regression problem with continuous output values. Hence, without the guidance of predefined category semantics, it seems not so straightforward to interpret low-level vision networks.
In this paper, we take super-resolution (SR), one of the most representative tasks in low-level vision, as the research object. Previously, it was generally thought that the features extracted from SR networks have no specific “semantic” information, and that the network simply learns some complex non-linear functions to model the relations between network input and output. Are the CNN features of SR networks really lacking any semantics? Can we find any kind of “semantics” in SR networks? In this paper, we aim to answer these questions. We reveal that there are semantics existing in SR networks. We first discover and interpret the “semantics” of deep representations in SR networks. But different from high-level vision networks, such semantics relate to image degradation types and degrees. Accordingly, we designate the deep semantic representations in SR networks as deep degradation representations (DDR).
A.2 LIMITATIONS
In this paper, we only explore the deep representations of SR networks. Other low-level vision networks are also worth exploring. We apply DDR to three tasks without elaborate designs in the application parts. For blind SR, we make a simple attempt to improve the model performance; the design is not optimal, and we believe there should be a more efficient and effective way to utilize DDR. For generalization evaluation, DDR can only evaluate the model generalization under constrained conditions. It shows the possibility of designing a generalization evaluation metric, but there is still a long way to go to realize this goal.
A.3 DEEP REPRESENTATIONS OF REAL-WORLD IMAGES
In the main paper, we mainly conduct experiments on synthetic degradations. The difficulty with real-world datasets is that it is hard to keep the content the same while changing the degradations. If we simply use two real-world datasets with different contents and different degradations, it is hard to say whether the feature discriminability is targeted at image content or at image degradation. Hence, synthetic data at least allow us to control the variables.
In addition, we find a plausible real-world dataset, Real-City100, proposed in the Camera SR paper. The authors use iPhone X and Nikon D5500 devices to capture controllable images. By adjusting the camera focal length, each camera captures paired images with the same content but different resolutions. The low-resolution images contain real-world degradations such as real noise and real blur. We test SRGAN on this dataset and obtain the corresponding visualization results, as shown in Fig. 12. It can be seen that the deep representations of SRGAN can still distinguish among different degradations across different devices.
A.4 CLASSIFICATION VS. SUPER-RESOLUTION
A.4.1 FORMULATION
Classification. Classification aims to categorize an input image $X$ into a discrete object class:
$\hat{Y} = G_{CL}(X), \quad (1)$
where $G_{CL}$ represents the classification network, and $\hat{Y} \in \mathbb{R}^C$ is the predicted probability vector indicating which of the $C$ categories $X$ belongs to. In practice, the cross-entropy loss is usually adopted to train the classification network:
$\mathrm{CE}(Y, \hat{Y}) = -\sum_{i=1}^{C} y_i \log \hat{y}_i, \quad (2)$
where $Y \in \mathbb{R}^C$ is a one-hot vector representing the ground-truth class label, and $\hat{y}_i$ is the $i$-th element of $\hat{Y}$, indicating the predicted probability that $X$ belongs to the $i$-th class.
Super-resolution. A general image degradation process can be modeled as follows:
$X = (Y \otimes k)\downarrow_s + n, \quad (3)$
where $Y$ is the high-resolution (HR) image and $\otimes$ denotes the convolution operation. $X$ is the degraded low-resolution (LR) image. There are three types of degradation in this model: the blur kernel $k$, the downsampling operation $\downarrow_s$ and the additive noise $n$. Hence, super-resolution can be regarded as a superset of other restoration tasks like denoising and deblurring.
Super-resolution (SR) is the inverse problem of Eq. (3). Given the input LR image $X \in \mathbb{R}^{M \times N}$, the super-resolution network attempts to produce its HR version:
$\hat{Y} = G_{SR}(X), \quad (4)$
where $G_{SR}$ represents the super-resolution network, $\hat{Y} \in \mathbb{R}^{sM \times sN}$ is the predicted HR image and $s$ is the upscaling factor. This procedure can be regarded as a typical regression task. At present, there are two groups of methods: MSE-based and GAN-based. The former treats SR as a reconstruction problem, utilizing pixel-wise losses such as the L2 loss to achieve high PSNR values:
$L_2(Y, \hat{Y}) = \frac{1}{r^2 NM} \sum_{i=1}^{rN} \sum_{j=1}^{rM} \|Y_{i,j} - \hat{Y}_{i,j}\|_2^2. \quad (5)$
This is the most widely used loss function in many image restoration tasks Dong et al. (2014); Lim et al. (2017); Zhang et al. (2018b;a); Cai et al. (2016); He et al. (2020). However, such a loss tends to produce over-smoothed images. To generate photo-realistic SR results, the latter method incorporates adversarial learning and a perceptual loss for better visual perception. The optimization is expressed as the following min-max problem:
$\min_{\theta_{G_{SR}}} \max_{\theta_{D_{SR}}} \; \mathbb{E}_{Y \sim p_{HR}}[\log D_{SR}(Y)] + \mathbb{E}_{X \sim p_{LR}}[\log(1 - D_{SR}(G_{SR}(X)))]. \quad (6)$
In such adversarial learning, a discriminator $D_{SR}$ is introduced to distinguish super-resolved images from real HR images. Then, the generator loss is defined as:
$L_G = -\log D_{SR}(G_{SR}(X)). \quad (7)$
From the formulation, we can clearly see that image classification and image super-resolution represent two typical tasks in machine learning: classification and regression. The output of the classification task is discrete, while the output of the regression task is continuous.
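To ground Eqs. (6)-(7), the sketch below shows the corresponding binary cross-entropy-with-logits form commonly used when training SRGAN-style models. The helper names are ours, and details such as loss weighting with the perceptual term are omitted.

```python
# Adversarial objectives of Eqs. (6)-(7) with a logit-output discriminator.
import torch
import torch.nn.functional as F

def d_loss(d_real, d_fake):
    # Discriminator (Eq. 6): push real logits toward 1, fake logits toward 0.
    return (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
            + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

def g_loss(d_fake):
    # Generator (Eq. 7): -log D(G(X)), i.e., try to fool the discriminator.
    return F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
```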
A.4.2 ARCHITECTURES
Due to the different output types, the CNN architectures of classification and super-resolution networks also differ. Generally, classification networks contain multiple downsampling layers (e.g., pooling and strided convolution) to gradually reduce the spatial resolution of the feature maps. After several convolutional and downsampling layers, there may be one or more fully-connected layers to aggregate global semantic information and generate a vector containing $C$ elements. For the output layer, the SoftMax operator is frequently used to normalize the previously obtained vector into a probabilistic representation. Renowned classification network structures include AlexNet Krizhevsky et al. (2012), VGG Simonyan & Zisserman (2015), ResNet He et al. (2016), InceptionNet Szegedy et al. (2015); Ioffe & Szegedy (2015); Szegedy et al. (2017), DenseNet Huang et al. (2017), SENet Hu et al. (2018), etc.
Unlike classification networks, super-resolution networks usually do not rely on downsampling layers, but on upsampling layers (e.g., bilinear upsampling, transposed convolution Zeiler et al. (2010) or subpixel convolution Shi et al. (2016)). Thus, the spatial resolution of the feature maps increases. Another difference is that the output of an SR network is a three-channel image, rather than an abstract probability vector. Well-known SR network structures include SRCNN Dong et al. (2014), FSRCNN Dong et al. (2016), SRResNet Ledig et al. (2017), RDN Zhang et al. (2018c), RCAN Zhang et al. (2018b), etc. An intuitive comparison of classification and SR networks in CNN architecture is shown in Fig. 18; one gradually downsamples, and the other gradually upsamples, which displays the discrepancy between high-level and low-level vision tasks in structure design.
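The contrast can be caricatured with two schematic network tails. This is our own simplified illustration (e.g., using global average pooling, whereas the unified framework in Sec. A.10 uses max-pooling layers), not the exact architectures.

```python
# Schematic tails: classification aggregates down to a C-way probability
# vector; SR expands features up to a 3-channel image. 64 channels and
# 10 classes are illustrative placeholders.
import torch.nn as nn

classification_tail = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),          # (B, 64, H, W) -> (B, 64, 1, 1)
    nn.Flatten(),                     # -> (B, 64)
    nn.Linear(64, 10),                # -> (B, C) class logits
    nn.Softmax(dim=1),                # probabilistic output; during training
)                                     # this is folded into the CE loss

sr_tail = nn.Sequential(
    nn.Conv2d(64, 64 * 4, 3, padding=1),
    nn.PixelShuffle(2),               # (B, 64, H, W) -> (B, 64, 2H, 2W)
    nn.Conv2d(64, 3, 3, padding=1),   # -> 3-channel image output
)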
Although there are several important architectural differences, classification networks and SR networks can share and adopt proven effective building modules, like skip connections He et al. (2016); Lim et al. (2017) and attention mechanisms Hu et al. (2018); Zhang et al. (2018b).
A.5 IMPLEMENTATION DETAILS
In the main paper, we conduct experiments on ResNet18 He et al. (2016) and SRResNet/SRGAN Ledig et al. (2017). We provide more details on the network structures and training settings here.
For ResNet18, we directly adopt the network structure depicted in He et al. (2016). Cross-entropy loss (Eq. 2) is used as the loss function. The learning rate is initialized to 0.1 and decreased with a cosine annealing strategy. We apply the SGD optimizer with weight decay 5×10⁻⁴. The trained model yields an accuracy of 92.86% on the CIFAR10 testing set, which consists of 10,000 images.
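A sketch of this setup in PyTorch is given below; `resnet18`, `num_epochs` and the momentum value of 0.9 are our own placeholders/assumptions, as the paper does not state them.

```python
# Classification training setup: SGD + weight decay 5e-4 + cosine annealing.
import torch

def make_classifier_optim(resnet18, num_epochs):
    optimizer = torch.optim.SGD(resnet18.parameters(), lr=0.1,
                                momentum=0.9, weight_decay=5e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer,
                                                           T_max=num_epochs)
    criterion = torch.nn.CrossEntropyLoss()  # the cross-entropy loss of Eq. (2)
    return optimizer, scheduler, criterion
```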
For SRResNet-wGR/SRResNet-woGR, we stack 16 residual blocks (RB) as shown in Fig. 3 of the main paper. The residual block is the same as depicted in Wang et al. (2018), in which all the BN layers are removed. Two pixel-shuffle layers Shi et al. (2016) are utilized to conduct upsampling in the network, while the global residual branch is upsampled by bilinear interpolation. L1 loss is adopted as the loss function. The learning rate is initialized to 2×10⁻⁴ and is halved at [100k, 300k, 500k, 600k] iterations. A total of 600,000 iterations are executed.
For SRGAN-wGR/SRGAN-woGR, the generator is the same as SRResNet-wGR/SRResNet-woGR. The discriminator is designed as in Ledig et al. (2017). The adversarial loss (Eq. 7) and the perceptual loss Johnson et al. (2016) are combined as the loss functions, kept the same as in Ledig et al. (2017). The learning rate of both the generator and the discriminator is initialized to 1×10⁻⁴ and is halved at [50k, 100k, 200k, 300k] iterations. A total of 600,000 iterations are executed. For all the super-resolution networks, we apply the Adam optimizer Kingma & Ba (2014) with β1 = 0.9 and β2 = 0.99. All the training LR patches are of size 128×128. When testing, 32×32 patches are fed into the networks to obtain deep features. In practice, we find that the patch size has little effect on revealing the deep degradation representations. All the above models are trained on the PyTorch platform with GeForce RTX 2080 Ti GPUs.
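The SRResNet schedule above translates to the following PyTorch sketch; `model` and `loader` are hypothetical handles for the network and a loader yielding (LR, HR) training pairs, and data handling/logging are omitted.

```python
# SRResNet optimization: Adam(0.9, 0.99), L1 loss, lr halved at milestones.
import torch

def train_srresnet(model, loader, total_iters=600_000):
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.99))
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[100_000, 300_000, 500_000, 600_000], gamma=0.5)
    it = 0
    while it < total_iters:
        for lr_batch, hr_batch in loader:
            loss = torch.nn.functional.l1_loss(model(lr_batch), hr_batch)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.step()  # one scheduler step per training iteration
            it += 1
            if it >= total_iters:
                break
```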
For the experiment on distortion identification, we use the aforementioned trained models to conduct inference on the LIVE dataset Sheikh et al. (2006). We crop the central 96×96 patch of each image to feed into the SR networks and obtain the corresponding deep representations. Then, the deep representations of each image are reduced to a 120-dimensional vector using PCA. Afterwards, a linear SVM is adopted as the classification tail. In practice, we find that the vector dimension can be even larger for better performance. Notably, unlike previous methods, the features here are not trained on any degradation-related labels or signals. The SR networks are only trained on clean data. However, the deep representations are excellent prior features for recognizing various distortion types. This is of great importance and very encouraging.
A.6 DEFINITIONS OF WD, BD AND CHI
In Sec. 3.1 of the main paper, we describe the adopted analysis method for deep feature representations. Many other works have also adopted similar approaches to interpret and visualize deep models, such as Graph Attention Networks Veličković et al. (2017), Recurrent Networks Karpathy et al. (2015), Deep Q-Networks Zahavy et al. (2016) and Neural Models in NLP Li et al. (2015). Most of the aforementioned works adopt t-SNE as a qualitative analysis technique. To better illustrate and quantitatively measure the semantic discriminability of deep feature representations, we take a step further and introduce several indicators, originally used to evaluate clustering performance, computed on the data structure after dimensionality reduction by t-SNE. Specifically, we adopt the within-cluster dispersion (WD), between-clusters dispersion (BD) and Calinski-Harabasz Index (CHI) Caliński & Harabasz (1974) to provide rough yet practicable quantitative measures for reference. For $K$ clusters, WD, BD and CHI are defined as:
$\mathrm{WD}(K) = \sum_{k=1}^{K} \sum_{i=1}^{n(k)} \|x_{ik} - \bar{x}_k\|^2, \quad (8)$
where $x_{ik}$ represents the $i$-th datapoint belonging to class $k$ and $\bar{x}_k$ is the mean of all $n(k)$ datapoints that belong to class $k$. Datapoints belonging to the same class should be close to each other; WD measures the compactness within a cluster.
$\mathrm{BD}(K) = \sum_{k=1}^{K} n(k) \|\bar{x}_k - \bar{x}\|^2, \quad (9)$
where $\bar{x}$ represents the mean of all datapoints. BD measures the distance between clusters; intuitively, a larger BD value indicates stronger discriminability between different feature clusters. Given $K$ clusters and $N$ datapoints in total ($N = \sum_k n(k)$), combining WD and BD, the CHI is formulated as:
$\mathrm{CHI}(K) = \frac{\mathrm{BD}(K)}{\mathrm{WD}(K)} \cdot \frac{N - K}{K - 1}. \quad (10)$
It is represented as the ratio of the between-clusters dispersion mean and the within-cluster dispersion. The CHI score is higher when clusters are dense and well separated, which relates to a standard concept of a cluster.
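A direct numpy translation of Eqs. (8)-(10) is sketched below; `x` and `labels` are hypothetical arrays of two-dimensional t-SNE datapoints and their cluster (degradation) assignments.

```python
# WD, BD and CHI as defined in Eqs. (8)-(10).
import numpy as np

def wd_bd_chi(x, labels):
    classes = np.unique(labels)
    K, N = len(classes), len(x)
    x_bar = x.mean(axis=0)            # mean of all datapoints
    wd = bd = 0.0
    for k in classes:
        xk = x[labels == k]
        xk_bar = xk.mean(axis=0)      # per-cluster mean
        wd += np.sum((xk - xk_bar) ** 2)               # Eq. (8)
        bd += len(xk) * np.sum((xk_bar - x_bar) ** 2)  # Eq. (9)
    chi = (bd / wd) * (N - K) / (K - 1)                # Eq. (10)
    return wd, bd, chi
```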
Rationality of Using Quantitative Measures with t-SNE. Notably, t-SNE is not a numerical technique but a probabilistic one. It minimizes the Kullback-Leibler (KL) divergence between the distributions that measure pairwise similarities of the input high-dimensional data and of the corresponding low-dimensional points in the embedding. Further, t-SNE involves a non-convex optimization performed by gradient descent, so several optimization parameters need to be chosen, such as perplexity, number of iterations and learning rate. Hence, the reconstructed solutions may differ with different optimization parameters and initial random states. In this paper, we used exactly the same optimization procedure for all experiments. Moreover, we conduct extensive experiments using different parameters and demonstrate that the quality of the optima does not vary much from run to run, which is also emphasized in the t-SNE paper. To make the quantitative analysis more statistically solid, for each projection process we run t-SNE five times and report the average and standard deviation of every metric.
A.7 FROM SHALLOW TO DEEP SR NETWORKS
In the main paper, we reveal that a shallow 3-layer SRCNN Dong et al. (2014) does not manifest representational discriminability on degradation types. We thus hypothesize that only deep SR networks possess such degradation-related semantics. To verify this statement, we gradually deepen SRCNN and observe how its deep representations change. We construct SRCNN models with different depths, from a shallow 3 layers to 13 layers. We train these models on DIV2K-clean data (inputs are only downsampled without other degradations) and test them on classical SR benchmarks. As shown in Tab. 4, the model achieves better SR performance with increasing network depth, suggesting that deeper networks and more parameters lead to greater learning capacity. On the other hand, the deep representations also gradually manifest discriminability on degradation types, as depicted in Fig. 14. When the model only has 3 layers, its representations cannot distinguish different degradation types. However, when we increase the depth to 13 layers, the deep representations begin to show discriminability on degradation types, with the CHI score increasing to 168.12.
A.8 MORE APPLICATIONS
Evaluating the Generalization Ability. According to the discussions in Sec. 4.6, DDR can be used as an approximate evaluation metric for generalization ability. Specifically, given a trained model and several test datasets with different degradations, we can obtain their DDR features. By evaluating the discriminability of the projection results (clustering effect), we can roughly measure the generalization performance over different degradation types. The worse the clustering effect, the better the generalizability. Fig. 11 shows the DDR clustering of different models. RRDB (clean) is unable to deal with degraded data and obtains lower PSNR values on blur and noise inputs; its CHI score is 322.16. By introducing degraded data into training, the model gains better generalization and the CHI score is 14.04. With DDR guidance, the generalization ability is further enhanced, and the CHI score decreases to 4.95. These results are consistent with those in the previous section. Interestingly, we do not need ground-truth images to evaluate the model generalization. A similar attempt has been made in recent work Liu et al. (2022). Note that CHI is only a rough index, which cannot accurately measure minor differences. DDR shows the possibility of designing a generalization evaluation metric, but there is still a long way to go to realize this goal.
A.9 EXPLORATION ON DIFFERENT DEGRADATION DEGREES
Previously, we introduced deep degradation representations by showing that the deep representations of SR networks are discriminative to different degradation types (e.g., clean, blur and noise). How about the same degradation type but with different degradation degrees? Will the deep representations still be discriminative to them? To explore this question, more experiments and analyses are performed.
We test super-resolution networks on degraded images with different noise and blur degrees. The results are depicted in Table 7 and Fig. 17. It can be seen that the deep degradation representations are discriminative not only to cross-degradation (different degradation types) but also to intra-degradation (the same degradation type with different degrees). This suggests that even for the same type of degradation, different degradation degrees cause significant differences in features. The greater the difference between degradation degrees, the stronger the discriminability of the feature representations. This also reflects another difference between the representation semantics of super-resolution and classification networks. For classification, the semantic discriminability of feature representations is generally discrete, because the semantics are associated with discrete object categories. Nevertheless, there appears to be a spectrum (continuous transition) in the discriminability of the deep degradation representations, i.e., the discriminability has a monotonic relationship with the divergence between degradation types and degrees. For example, the degradation difference between noise levels 10 and 20 is less distinct, and the discriminability of the feature representations is relatively smaller, compared with noise levels 10 and 30.
From Table 7, there are several notable observations. 1) Compared with blur degradation, noise degradation is easier to discriminate. Yet, it is difficult to obtain deep representations with strong discriminability for different blur levels; even for the GAN-based method, global residual (GR) is indispensable for obtaining representations that can discriminate different blur levels. 2) The representations obtained by the GAN-based method have more discriminative semantics regarding degradation types and degrees than those of the MSE-based method. 3) Again, global residual strengthens the representation discriminability for degradations.
A.10 EXPLORATION OF NETWORK STRUCTURE
In the main paper, we choose ResNet18 He et al. (2016) and SRResNet/SRGAN Ledig et al. (2017) as the backbones of the classification and SR networks, respectively. In order to eliminate the influence of different network structures, we design a unified backbone framework, which is composed of the same basic building modules but connected with different tails for downsampling and upsampling to conduct classification and super-resolution, respectively.
The unified architecture is shown in Fig. 18. To differ from the residual block in the main paper, we adopt a residual channel attention layer as the basic building block, inspired by SENet Hu et al. (2018) and RCAN Zhang et al. (2018b). For classification, the network tail consists of three max-pooling layers and a fully connected layer; for super-resolution, the network tail consists of two pixel-shuffle layers to upsample the feature maps. According to the conclusions in the main paper, we adopt global residual (GR) in the network design to obtain deep degradation representations (DDR). Except for the network structure, all the training protocols are kept the same as in the main paper. The training details are the same as depicted in Sec. A.5. After training, the unified backbone framework for classification yields an accuracy of 92.08% on the CIFAR10 testing set.
The experimental results are shown in Fig. 19, Fig. 20 and Tab. 8. From the results, we can see that the observations are consistent with the findings in the main paper. This suggests that the semantic representations do not stem from the network structure but from the task itself. Hence, our findings are not limited to specific structures but are universal.
A.11 MORE INSPIRATIONS AND FUTURE WORK
Disentanglement of Image Content and Degradation. In plenty of image editing and synthesis tasks, researchers seek to disentangle an image into different attributes so that the image can be finely edited Karras et al. (2019); Ma et al. (2018); Deng et al. (2020b); Lee et al. (2018); Nitzan et al. (2020). For example, semantic face editing Shen et al. (2020a;b); Shen & Zhou (2020) aims at manipulating facial attributes of a given image, e.g., pose, gender, age, smile, etc. Most methods attempt to learn disentangled representations and control the facial attributes by manipulating the latent space. In low-level vision, deep degradation representations can make it possible to decompose an image into content and degradation information, which can promote a number of new areas, such as degradation transfer and degradation editing. Further, more in-depth research on deep degradation representations will also greatly improve our understanding of the nature of images.
A.12 DISCUSSIONS ON DIMENSIONALITY REDUCTION
Among the numerous dimensionality reduction techniques (e.g., PCA Hotelling (1933), CCA Demartines & Hérault (1997), LLE Roweis & Saul (2000), Isomap Tenenbaum et al. (2000), SNE Hinton & Roweis (2002)), t-distributed stochastic neighbor embedding (t-SNE) Van der Maaten & Hinton (2008) is a widely-used and effective algorithm. It can greatly capture the local structure of high-dimensional data and simultaneously reveal global structure such as the presence of clusters at several scales. Following Donahue et al. (2014); Mnih et al. (2015); Wen et al. (2016); Zahavy et al. (2016); Veličković et al. (2017); Wang et al. (2020b); Huang et al. (2020), we also take advantage of the superior manifold learning capability of t-SNE for feature projection.
In this section, we further explain the effectiveness of adopting t-SNE and why we project high-dimensional features into two-dimensional datapoints. We first compare the projection results of PCA and t-SNE. From the results shown in Fig. 21, it can be observed that the features projected by t-SNE are successfully clustered together according to the semantic labels, while the features projected by PCA are not well separated. This is because PCA is a linear dimensionality reduction method, which cannot deal with the complex non-linear data obtained by neural networks. Thus, t-SNE is a better choice for conducting dimensionality reduction on CNN features. This suggests the effectiveness of t-SNE for the purpose of feature projection. Note that we do not claim t-SNE is the optimal or the best choice for dimensionality reduction. We just utilize t-SNE as a rational tool to show the trend behind deep representations, since t-SNE has been proven effective and practical in our experiments and in other works.
Then, we discuss the dimensions to reduce to. We conduct dimensionality reduction to different dimensions. Since the highest dimension supported by t-SNE is 3, we first compare the two-dimensional and three-dimensional features projected by t-SNE. The qualitative and quantitative results are shown in Fig. 21 and Tab. 9. When we reduce the features to three dimensions, the reduced representations also show discriminability with respect to semantic labels. However, quantitative results show that two dimensions portray the discriminability better than three or higher dimensions. For PCA, the results are similar: with higher dimensions, the discriminability decreases. Hence, it is reasonable to reduce high-dimensional features into two-dimensional datapoints. Such settings are also adopted in Donahue et al. (2014); Wang et al. (2020b); Veličković et al. (2017); Huang et al. (2020), and are proven effective.
A.13 VISUALIZATION OF FEATURE MAPS
So far, we have successfully revealed the degradation-related semantics in SR networks with dimensionality reduction. In this section, we directly visualize the deep feature maps extracted from SR networks to provide some intuitive and qualitative interpretations. Specifically, we extract the feature maps obtained from four models (SRResNet-wGR, SRResNet-woGR, SRGAN-wGR and SRGAN-woGR) on images with different degradations (clean, blur4, noise20). Then we treat each feature map as a one-channel image and plot it. The visualized feature maps are shown in Fig. 22. We select the 8 feature maps with the largest eigenvalues for display. The complete results are shown in the supplementary file.
Influence of degradations on feature maps. From Fig. 22(a), we can observe that the deep features obtained by SRResNet-woGR portray various characteristics of the input image, including edges, textures and contents. In particular, we highlight in “red rectangles” the features that retain most of the image content. As shown in Fig. 22(b), after applying blur and noise degradations to the input image, the extracted features exhibit similar degradations as well. For blurred/noisy input images, the extracted feature maps also contain homologous blur/noise degradations.
Effect of global residual. In Sec. 4.3, we revealed the importance and effectiveness of global residual (GR) for obtaining deep degradation representations in SR networks. But why is GR so important? What is its role? Through visualization, we can provide a qualitative and intuitive explanation here. Comparing Fig. 22(a) and Fig. 22(b), it can be observed that by adopting GR, the extracted features contain fewer components of the original shape and content information. Thus, GR helps remove redundant image content information and makes the network concentrate more on obtaining features related to low-level degradation information.
Effect of GAN. Previously, we have discussed the difference between MSE-based and GAN-based SR methods in their deep representations. We find that the GAN-based method can better obtain feature representations that are discriminative to different degradation types. As shown in Fig. 22(a) and Fig. 22(c), the feature maps extracted by the GAN-based method contain less object shape and content information compared with the MSE-based method. This partially explains why the deep representations of the GAN-based method are more discriminative, even without the global residual. Comparing Fig. 22(c) and Fig. 22(d), when there is a global residual, the feature maps containing the original image content are further reduced, leading to stronger discriminability to degradation types.
A.14 SAMPLES OF DIFFERENT DATASETS
In the main paper, we adopt several different datasets to conduct experiments. Fig. 23 displays some example images from these datasets.
(a) DIV2K-clean: the original DIV2K Agustsson & Timofte (2017) dataset. The high-resolution (HR) ground-truth (GT) images have 2K resolution and are of high visual quality. The low-resolution (LR) input images are downsampled from HR by bicubic interpolation, without any further degradations.
(b) DIV2K-noise: adding Gaussian noise to the DIV2K-clean LR input, thus making it contain extra noise degradation. DIV2K-noise20 means the additive Gaussian noise level σ is 20, where the number denotes the noise level.
(c) DIV2K-blur: applying Gaussian blur to DIV2K-clean LR input, thus making it contain extra blur degradation. DIV2K-blur4 means the Gaussian blur width is 4.
(d) DIV2K-mild: officially synthesized from the DIV2K Agustsson & Timofte (2017) dataset as a challenge dataset Timofte et al. (2017; 2018), which contains noise, blur, pixel shifting and other degradations. The degradation modelling is unknown to challenge participants.
(e) Hollywood100: 100 images selected from Hollywood dataset Laptev et al. (2008), containing real-world old film frames with unknown degradations, which may have compression, noise, blur and other real-world degradations.
Dataset (a), (b), (c) and (d) have the same image contents but different degradations. However, we find that the deep degradation representations (DDR) obtained by SR networks have discriminability to these degradation types, even if the network has not seen these degradations at all during training. Further, for real-world degradation like in (e), the DDR are still able to discern it.
Summary Of The Paper
This paper aims to analyze the feature representations and explore the "semantics" in SR networks, namely, deep degradation representations. The authors reveal two factors, i.e., adversarial learning and global residual, which influence the extraction of such "semantics".
Strengths And Weaknesses
[Strength]
The paper is well-written and easy to understand.
The paper provides comprehensive analyses, demonstrations, and discussions on DDR. The applications to blind SR, distortion identification and generalization evaluation are also interesting.
[Weakness]
The main concern is that the issue this paper explores would be trivial for further low-level research and realistic SR applications. There are three aspects that make me doubt the significance of this work:
Empirical settings: All the empirical demonstrations and the important observations and conclusions are mainly based on synthetic data, including artificial blur and noise. In particular, it is strange that two versions of the noise dataset are used for demonstrations, namely DIV2K-noise20 and DIV2K-noise. For example, in Fig. 3, subfigures (a)-(d) use different datasets.
Although Hollywood100 is claimed to contain real-world old-film degradations, it has stable intrinsic properties, reflecting film style, image degradation, etc. Even though it presents a different feature distribution from DIV2K-mild/-noise in Fig. 3, it is not straightforward to conclude that this difference results from the image degradation. I do not think the analyses of DDR or "semantics" are convincing. Besides, why not use a more complex real-world SR dataset, e.g., RealSR, rather than Hollywood100?
Setting the problems above aside, suppose the observed "semantics" hold. In this case, are those observations and conclusions important for the SR task, especially for real-world SR? The paper offers few insights on how to actually address SR problems. Although there are discussions on blind SR, the paper only conducts the evaluation on synthetic data and shows the learned features. What is the importance of the observation from Fig. 11 (on Page 9, "Fig. 11 reveals that DDR guidance can make the deep features become more homogenous")? Besides, the demonstrations in Fig. 11 are conducted on Urban100, which is also confusing: Urban100 is not mentioned in the main paper.
On one hand, the demonstrations and analyses are based on very few SR methods with simple network architectures, not including, for example, RCAN with its attention mechanism or Transformer-based networks. It would be misleading, confusing, and unconvincing to understand the contributions of this paper from these alone, let alone to build further SR research on it. On the other hand, most of the experiments are conducted on synthetic data with artificial image degradation, which is idealized and hand-crafted, offering few promising insights for further research.
Clarity, Quality, Novelty And Reproducibility
The paper is well written and presents a clear idea. But I strongly doubt its significance to the related research.
ICLR | Title
Discovering Distinctive "Semantics" in Super-Resolution Networks
Abstract
Image super-resolution (SR) is a representative low-level vision problem. Although deep SR networks have achieved extraordinary success, we are still unaware of their working mechanisms. Specifically, can SR networks learn semantic information, or do they just perform complex mapping functions? What hinders SR networks from generalizing to real-world data? These questions not only raise our curiosity, but also influence SR network development. In this paper, we make a primary attempt to answer the above fundamental questions. After comprehensively analyzing the feature representations (via dimensionality reduction and visualization), we successfully discover the distinctive “semantics” in SR networks, i.e., deep degradation representations (DDR), which relate to image degradation instead of image content. We show that a well-trained deep SR network is naturally a good descriptor of degradation information. Our experiments also reveal two key factors (adversarial learning and global residual) that influence the extraction of such semantics. We further apply DDR in several interesting applications (such as distortion identification, blind SR and generalization evaluation) and achieve promising results, demonstrating the correctness and effectiveness of our findings.
1 INTRODUCTION
The emergence of deep convolutional neural network (CNN) has given birth to a large number of new solutions to low-level vision tasks (Dong et al., 2014; Zhang et al., 2017). Among these signs of progress, image super-resolution (SR) has enjoyed a great performance leap. Compared with traditional methods (e.g., interpolation (Keys, 1981) and sparse coding (Yang et al., 2008)), SR networks can achieve better performance with improved efficiency.
However, even though we have benefited a lot from powerful CNNs, we have little knowledge about what happens inside SR networks and what actually distinguishes them from traditional approaches. Does the performance gain merely come from more complex mapping functions? Or is there anything different inside SR networks, like classification networks with discriminative capability? On the other hand, as a classic regression task, SR is expected to perform a continuous mapping from low-resolution (LR) to high-resolution (HR) images. It is generally a local operation without the consideration of the global context. But with the introduction of GAN-based models Ledig et al. (2017); Wang et al. (2018), more delicate SR textures can be generated. It seems that the network has learned some kind of semantics, which is beyond our common perception of regression tasks.
Then, we may raise the question: are there any “semantics” in SR networks? If yes, do these semantics have different definitions from those in classification networks? Existing literature cannot answer these questions, as there is little research on interpreting low-level vision deep models. Nevertheless, discovering the semantics in SR networks is of great importance. It can not only help us further understand the underlying working mechanisms, but also guide us to design better networks and evaluation algorithms.
In this study, we give affirmative answers to the above questions by unfolding the semantics hidden in super-resolution networks. Specifically, different from the artificially predefined semantics associated with object classes in high-level vision, semantics in SR networks are distinct in terms of image degradation instead of image content. Accordingly, we name such semantics deep degradation representations (DDR). More interestingly, such degradation-related semantics exist spontaneously without any predefined labels. We reveal that a well-trained deep SR network is naturally a good descriptor of degradation information.
Notably, the semantics in this paper have different implications from those in high-level vision. Previously, researchers have disclosed the hierarchical nature of classification networks (Zeiler & Fergus, 2014; Gu et al., 2018). As the layer deepens, the learned features respond more to abstract high-level patterns (e.g., faces and legs), showing a stronger discriminability to object categories (see Fig. 4). However, similar research in low-level vision is absent, since there are no predefined semantic labels. In this paper, we reveal the differences in deep “semantics” between classification and SR networks, as illustrated in Fig. 1.
Our observation stems from a representative blind SR method – CinCGAN Yuan et al. (2018), and we further extend it to more common SR networks – SRResNet and SRGAN Ledig et al. (2017). We have also revealed more interesting phenomena to help interpret the semantics, including the analogy to classification networks and the influential factors for extracting DDR. Moreover, we improve the results of several tasks by exploiting DDR. We believe our findings could lay the groundwork for the interpretability of SR networks, and inspire more exploration of the mechanism of low-level vision deep models.
Contributions. 1) We have successfully discovered the “semantics” in SR networks, denoted as deep degradation representations (DDR). Through in-depth analysis, we also find that global residual learning and adversarial learning can facilitate the SR network to extract such degradation-related representations. 2) We reveal the differences in deep representations between classification and SR networks, for the first time. This further expands our knowledge of the deep representations of high- and low-level vision models. 3) We exploit our findings to several fundamental tasks and achieve very appealing results, including distortion identification, blind SR and generalization evaluation.
2 RELATED WORK
Super-resolution. Super-resolution (SR) is a fundamental task in low-level vision, which aims to reconstruct the high-resolution (HR) image from the corresponding low-resolution (LR) counterpart. SRCNN (Dong et al., 2014) is the first CNN-based method proposed for SR. Since then, a large number of deep-learning-based methods have been developed (Dong et al., 2016; Lim et al., 2017; Zhang et al., 2018b; Ledig et al., 2017; Zhang et al., 2019). Generally, current CNN-based SR methods can be categorized into two groups. One is the MSE-based methods, which aim to minimize the distortion (e.g., Mean Square Error) between the ground-truth HR image and the super-resolved image to yield high PSNR values, such as SRCNN (Dong et al., 2014), VDSR (Kim et al., 2016), EDSR (Lim et al., 2017), RCAN (Zhang et al., 2018b), SAN (Dai et al., 2019), etc. The other is the GAN-based methods, which incorporate a generative adversarial network (GAN) and perceptual loss (Johnson et al., 2016) to obtain perceptually pleasing results, such as SRGAN (Ledig et al., 2017),
Figure 2: Different degraded input images and their corresponding outputs produced by CinCGAN (Yuan et al., 2018), BM3D (Dabov et al., 2007), and SRCNN (Dong et al., 2014). CinCGAN (Yuan et al., 2018) is trained on DIV2K-mild dataset in an unpaired manner. If the input image conforms to the training data distribution, CinCGAN will generate better restoration results than BM3D (a). Otherwise, it tends to ignore the unseen degradation types (b)&(c). On the other hand, the traditional method BM3D (Dabov et al., 2007) has stable performance and similar denoising effects on all input images, regardless of the input degradation types. Zoom in for the best view.
ESRGAN (Wang et al., 2018), RankSRGAN (Zhang et al., 2019), SROBB (Rad et al., 2019). Recently, blind SR has attracted more and more attention (Gu et al., 2019; Bell-Kligler et al., 2019; Luo et al., 2020; Wang et al., 2021), which aims to solve SR with unknown real-world degradations. A comprehensive survey of blind SR has recently been proposed (Liu et al., 2021), which summarizes existing methods. We regard SR as a representative research object and study its deep semantic representations. Our analysis can also provide inspiration for other low-level vision tasks.
Network interpretability. At present, most existing works on neural network interpretability focus on high-level vision tasks, especially image classification. Zhang et al. (Zhang et al., 2020) systematically reviewed existing literature on network interpretability and proposed a novel taxonomy to categorize it. Here we only discuss several classic works. By adopting deconvolutional networks (Zeiler et al., 2010), Zeiler et al. (Zeiler & Fergus, 2014) projected the downsampled low-resolution feature activations back to the input pixel space, and then performed a sensitivity analysis to reveal which parts of the image are important for classification. Simonyan et al. (Simonyan et al., 2013) generated a saliency map from the gradients through a single backpropagation pass. Based on class activation maps (CAM) (Zhou et al., 2016), Selvaraju et al. (Selvaraju et al., 2017) proposed Grad-CAM (Gradient-weighted CAM) to produce a coarse-grained attribution map of the important regions in the image, which was broadly applicable to any CNN-based architecture. For more information about the network interpretability literature, please refer to the survey paper (Zhang et al., 2020). However, for low-level vision tasks, similar research is rare. Recently, the local attribution map (LAM) (Gu & Dong, 2021) has been proposed to interpret super-resolution networks, which can be used to localize the input features that influenced the network outputs. Besides, Wang et al. (Wang et al., 2020b) presented a pioneering work that bridges the representation relationship between high- and low-level vision. They learned the mapping between deep representations of low- and high-quality images, and leveraged it as a deep degradation prior (DDP) for low-quality image classification. Inspired by these previous works, we interpret SR networks from a new perspective. We dive into their deep feature representations, and discover the “semantics” of SR networks. More background knowledge is described in the supplementary file.
3 MOTIVATION
To begin with, we present an interesting phenomenon, which drives us to start exploring the deep representations of SR networks. It is well known that SR networks are superior to traditional methods in specific scenarios, but are inferior in generalization ability. In blind SR, the degradation types of the input test images are unknown. For traditional methods, they treat different images equally without distinction of degradation types, thus their performance is generally stable and predictable. How about the SR networks, especially those designed for blind SR?
CinCGAN (Yuan et al., 2018) is a representative solution for real-world SR without paired training data. It maps a degraded LR to its clean version using data distribution learning before conducting the SR operation. However, we find that it still has a limited application scope even though CinCGAN is developed for blind settings. If the degradation of the input image is not included in the training data, CinCGAN will fail to transfer the degraded input to a clean one. More interestingly, instead of producing extra artifacts in the image, it seems that CinCGAN does not process the input image at all and retains all the original defects. Readers can refer to Fig. 2 for an illustration, where CinCGAN performs well on the testing image of the DIV2K-mild dataset (same distribution as its training data), but produces unsatisfactory results for other degradation types. In other words, the network seems to figure out the specific degradation types within its training data distribution, and distribution mismatch may make the network “turn off” its ability. This makes the performance of CinCGAN unstable and unpredictable. For comparison, we process the above three types of degraded images by a traditional denoising method BM3D (Dabov et al., 2007) (see footnote 1). The visual results show that BM3D has an obvious and stable denoising performance for all different degradation types. Although the results of BM3D may be mediocre (the image textures are largely over-smoothed), it does take effect on every input image. This observation reveals a significant discrepancy between traditional methods and SR networks.
The above interesting phenomenon indicates that the deep network has learned more than a regression function, since it demonstrates the ability to distinguish among different degradation types. Inspired by this observation, we try to find any semantics hidden in SR networks.
4 DIVING INTO THE DEEP DEGRADATION REPRESENTATIONS
4.1 DISCRIMINABILITY OF DEEP REPRESENTATIONS IN DEEP SR NETWORKS
Feature projection and visualization. Since the final outputs are always derived from features in CNN layers, we start the exploration with feature maps, especially the deep ones potentially with more global and abstract information. To interpret the deep features of CNN, one common and rational way is to convert the high-dimensional CNN feature maps into lower-dimensional datapoints that can be visualized in a scatterplot. Afterwards, one can intuitively understand the data structures and manifolds. Specifically, we adopt t-Distributed Stochastic Neighbor Embedding (t-SNE) (Van der Maaten & Hinton, 2008) for dimensionality reduction. This algorithm is commonly used in manifold learning, and it has been successfully applied in previous works (Donahue et al., 2014; Mnih et al., 2015; Wen et al., 2016; Zahavy et al., 2016; Veličković et al., 2017; Wang et al., 2020b; Huang et al., 2020) for feature projection and visualization. In our experiments, we first reduce the dimensionality of feature maps to a reasonable amount (50 in this paper) using PCA (Hotelling, 1933), then apply t-SNE to project the 50-dimensional representation to two-dimensional space, after which the results are visualized in a scatterplot. Furthermore, we also introduce CHI (Caliński & Harabasz, 1974) score to quantitatively evaluate the distributions of visualized datapoints. The CHI score is higher when clusters are well separated, which indicates stronger semantic discriminability.
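A minimal sketch of this projection pipeline is given below, assuming scikit-learn (the paper does not name a specific implementation; names are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.metrics import calinski_harabasz_score

def project_and_score(features, labels):
    """features: (N, D) flattened deep feature maps of N images;
    labels: (N,) degradation type per image, used only to color the
    scatterplot and to compute the CHI score, not for the projection."""
    feats50 = PCA(n_components=50).fit_transform(features)  # PCA to 50-D
    points2d = TSNE(n_components=2).fit_transform(feats50)  # t-SNE to 2-D
    chi = calinski_harabasz_score(points2d, labels)         # discriminability
    return points2d, chi
```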
What do the deep features of SR networks represent? As discussed in Sec.3, since CinCGAN performs differently on various degradations, we compare the features generated from three testing datasets: 1) DIV2K-mild: training and testing data used in CinCGAN, which are synthesized
Footnote 1: Note that BM3D is a denoising method while CinCGAN is able to upsample the resolution of the input image. Thus, after applying BM3D, we apply bicubic interpolation to unify the resolution of the output image. This is reasonable as we only evaluate their denoising effects.
from DIV2K (Agustsson & Timofte, 2017) dataset, containing noise, blur, pixel shifting and other degradations. 2) DIV2K-noise20: add Gaussian noise (σ = 20) to DIV2K set. 3) Hollywood100: 100 images selected from Hollywood dataset (Laptev et al., 2008), containing real-world old film degradations. Each test dataset includes 100 images.
As shown in Fig. 3(a), there is a strong feature discriminability for various degradations. Images with aligned contents but different degradation types are still separated into different clusters (see footnote 2). This phenomenon conforms to our observation that CinCGAN does treat various input degradations in different ways. It naturally reveals the “semantics” of deep representations in CinCGAN, which are closely related to the degradation types rather than the image content. For comparison, we may wonder whether traditional methods have similar behaviors (or ”semantics”). However, our feature analysis method can only work for deep models, which contain hierarchical feature maps. It is acknowledged that the simplest network – SRCNN can be analogous to a sparse-coding-based method, thus we can use SRCNN to shed light on the behaviors of traditional methods. We train an SRCNN (see footnote 3) with the same data as CinCGAN, and visualize the feature representations of the last layer in Fig. 3(b). It is obvious that different degradations cannot be clearly separated. This phenomenon is completely different from CinCGAN. We can conjecture that the degradation-related semantics only exist in deep models, not traditional methods or shallow networks. More analysis on shallow networks can be found in the supplementary file.
From CinCGAN to Generic SRGAN. Notably, the training of CinCGAN involves degraded images (DIV2K-mild). It actually performs simultaneous restoration and SR. We also wonder how this kind of degradation-related semantics manifests in classical SR networks (without exposure to other degradation types except for downsampling). Therefore, we adopt a generic GAN-based SR network SRGAN (Ledig et al., 2017; Wang et al., 2018) to conduct the visualization experiment. SRGAN is trained with DIV2K dataset (Agustsson & Timofte, 2017) with only bicubic-downsampled LR images. According to the common degradation modelling in low-level vision, we use three datasets with different degradation types for testing: 1) DIV2K-clean: the original DIV2K validation set containing only bicubic downsampling degradation, which conforms to the training data distribution. 2) DIV2K-blur: introduce blurring degradation with Gaussian blur kernel on the DIV2K-clean set. The kernel width is randomly sampled from [2, 4] for each image and the kernel size is fixed to 15×15. 3) DIV2K-noise: add Gaussian noises to the DIV2K-clean set. The noise level is randomly sampled from [5, 30] for each image. These three testing datasets are aligned in image content but different in degradation types.
As shown in Fig.3(d), a clustering trend similar to CinCGAN is clearly demonstrated. This provides stronger evidence for the existence of degradation-related semantics. Even if the three testing sets share the same content, they are still separated into distinct clusters according to the degradation types. In the supplementary file, similar phenomena are observed with other network structures. Note again, shallow SRCNN does not have such feature discriminability (see Fig.3(c)).
Here, we have successfully found the semantics hidden in deep SR networks. They are perceivable to humans when visualized in low-dimensional space. Specifically, semantics in deep SR networks are in terms of degradation types regardless of the image contents. Simply but vividly, we name this kind of semantics deep degradation representations (DDR).
Is DDR a natural and trivial observation? No, there are three reasons. First, DDR has never been discussed before. The function of deep SR networks is beyond simple regression. The learned deep features can spontaneously characterize the image degradations, indicating that a well-trained deep SR network is naturally a good descriptor of degradation information. Note again that the deep SR networks have not observed any blurry or noisy data during training, but still have discriminative ability for different degradations. Second, DDR in SR is not simply caused by different input patterns. We find that different networks will learn different semantic representations. For example, in Sec. 4.2, we reveal the differences in the learned representations between classification and SR networks. In Sec. 4.3, we show that not all SR network structures can easily obtain DDR. DDR does not exist in certain cases, e.g., shallow networks. Third, DDR has important applications and inspirations. It can not only expand our understanding of the underlying mechanisms of low-level
Footnote 2: Note that the class labels in the scatterplots are only used to assign a color/symbol to the datapoints for better visualization.
Footnote 3: We use the same architecture as the original paper Dong et al. (2014) and add global residual for better visualization.
vision models, but also help promote the development of other tasks. In Sec. 5, we apply DDR to several fundamental tasks and achieve appealing results, implying the great potential of DDR.
4.2 DIFFERENCES IN SEMANTICS BETWEEN CLASSIFICATION AND SR NETWORKS
In high-level vision, classification is one of the most representative tasks, where artificially predefined semantic labels on object classes are given as supervision. We choose ResNet18 (He et al., 2016) as the classification backbone and conduct experiments on the CIFAR10 dataset (Krizhevsky et al., 2009). We extract the forward features of each input testing image (see footnote 4) at different network layers, as described in Fig. 3(e)-a.
Fig. 4 shows that as the network deepens, the extracted feature representations produce obvious discriminative clusters, i.e., the learned features are increasingly becoming semantically discriminative. Such discriminative semantics in classification networks are coherent with the artificially predefined labels. This is an intuitive and natural observation, on which lots of representation and discriminative learning methods are based (Wen et al., 2016; Oord et al., 2018; Lee et al., 2019; Wang et al., 2020b).
Further, we add blur and noise degradation to the CIFAR10 test images, and then investigate the feature representations of classification and SR networks. Note that no degradation is added to the training data. As shown in Fig. 5, after adding degradations to the test data, the deep representations obtained by the classification network (ResNet18) are still clustered by object categories, indicating that the features focus more on high-level object class information. On the contrary, the deep representations obtained by SR networks (SRResNet and SRGAN) are clustered with regard to degradation types. The features of the same object category are not clustered together, while those of the same degradation type are clustered together, showing different “semantic” discriminability. This phenomenon intuitively illustrates the differences in the deep semantic representations between SR and classification networks, i.e., degradation-related semantics and content-related semantics. More interestingly, the “semantics” in SR networks exists naturally, because the SR networks only see clean data without any input or labelled degradation information.
4.3 HOW DO GLOBAL RESIDUAL AND ADVERSARIAL LEARNING AFFECT THE DEEP REPRESENTATIONS?
Previously, we have elaborated on the deep degradation representations in CinCGAN, SRGAN and SRResNet. Nevertheless, we further discover that not every SR network structure has such a property. To be specific, we find two crucial factors that can influence the learned representations: i) image global residual (GR), and ii) generative adversarial learning (GAN).
Footnote 4: For efficiency, we selected 100 testing images of each category (1,000 images in total).
Global Residual. We train two SRResNet networks – SRResNet (with global residual) and SRResNet-woGR (without global residual), as shown in Fig. 3. The two architectures are both common in practice (Kim et al., 2016; Shi et al., 2016). DIV2K (Agustsson & Timofte, 2017) dataset is used for training, where the LR images are bicubic-downsampled and clean. Readers can refer to the supplementary file for more details. After testing, the feature visualization analysis is shown in Fig. 6.
The results show that for MSE-based SR method, GR is essential for producing discriminative representations on degradation types. The features in “ResBlock16” of SRResNet have shown distinct discriminability, where the clean, blur, and noise data are clustered separately. On the contrary, SRResNet-woGR shows no discriminability even in deep layers. This phenomenon reveals that GR significantly impacts the learned feature representations. It is inferred that learning the global residual could remove most of the content information and make the network concentrate more on the contained degradation. This claim is also corroborated by visualizing the feature maps in the supplementary file.
Adversarial Learning. MSE-based and GAN-based methods are currently two prevailing trends in CNN-based SR methods. Previous studies only reveal that the output images of MSE-based and GAN-based methods are different, but the differences between their feature representations are rarely discussed. Since their learning mechanisms are quite different, will there be a discrepancy in their deep feature representations? We directly adopt SRResNet and SRResNet-woGR as generators. Consequently, we build two corresponding GAN-based models, namely SRGAN and SRGAN-woGR. After training, we perform the same test and analysis process mentioned earlier.
The results show that the deep features are bound to be discriminative to degradation types for the GAN-based method, whether there is GR or not. As shown in Fig. 7(d)(h), the deep representations in “ResBlock16” of SRGAN-woGR have already been clustered according to different degradation types. This suggests that the learned deep representations of MSE-based method and GAN-based method are dissimilar. Adversarial learning can help the network learn more informative features for distinguishing image degradation rather than image content.
4.4 HOW DOES DDR EVOLVE THROUGH THE TRAINING PROCESS?
We also reveal the relationship between the model performance and DDR discriminability. We select SRResNet models with different training iterations for testing. We report the model performance
on the DIV2K-clean validation dataset and calculate the CHI scores to evaluate its discriminability on clean, blur and noise data. As shown in Fig. 8, as training proceeds, the performance of the model improves, while the feature discriminability for degradation is also enhanced. From random initialization to 700k iterations, the CHI score increases significantly from 0.00 to 591.68, while the PSNR value improves by 2.87 dB (due to GR, the initial PSNR value is relatively high). The training data only include clean LR images, but the trained model has the ability to discriminate unseen degradation types. This clearly implies that a well-trained deep SR network is naturally a good descriptor of degradation information.
4.5 FURTHER DISCUSSION ON THE CAUSES OF DDR PHENOMENON
In the previous sections, we reveal several important factors that promote the manifestation of DDR phenomenon, including global residual, adversarial learning (Sec. 4.3) and training iterations (Sec. 4.4). Based on the above findings and more visualization results, we can analyze the causes of DDR more deeply. We visualize the feature maps of SRResNet-wGR, SRResNet-woGR, SRGAN-wGR and SRGAN-woGR on test images with different degradations in the Appendix.
The DDR phenomenon is mainly introduced by overfitting to the degradation in the training data. Specifically, since the training data (DIV2K-clean) do not contain extra degradations, the trained SR network lacks the ability to deal with unseen degradations. When fed images with degradations (e.g., noise and blur), it will produce features with unprocessed noise or blur. These patterned features naturally show a strong discriminability between different degradations. As for GR, models with GR produce features that contain fewer components of the original content information. GR can help remove the redundant image content information and make the network concentrate more on degradation-related information. GAN training also enhances the high-frequency degradation information. Besides, prolonging the training iterations and deepening the network will make the network further overfit to the training data.
4.6 WHY SR NETWORKS CAN HARDLY GENERALIZE TO UNSEEN DEGRADATIONS?
Classical SR models (Dong et al., 2014; Lim et al., 2017) assume that the input LR images are generated by fixed downsampling kernel (e.g., bicubic). However, it is difficult to apply such simple SR models to real scenarios with unknown degradations. We claim that SR and restoration networks learn to overfit the distribution of degradations, rather than the distribution of natural clean images.
To verify our statements, we compare the representations of SRGAN-wGR models trained on clean data and on clean+noise data, respectively. As presented in Fig. 9, if the model is trained only on clean LR data, the deep representations show strong discriminability between clean and noise data. In contrast, if the model sees noise data during training, such discriminability diminishes. The model becomes more robust to more degradation types, as the distributions of the deep representations become homogeneous. In summary, to improve the model generalization for various degradations, we need to diminish the feature discriminability to degradations. Adding more degraded data into training is a plausible way to enhance generalization.
5 APPLICATIONS AND INSPIRATIONS
Image Distortion Identification Using DDR Features. Image distortion identification (Liang et al., 2020) is an important subsidiary pretreatment for many image processing systems, especially for image quality assessment (IQA). It aims to recognize the distortion type from distorted images, so as to facilitate downstream tasks (Mittal et al., 2012a; Gu et al., 2019; Liang et al., 2020). Previous methods usually resort to designing handcrafted features that can distinguish different degradation types (Mittal et al., 2012a;b) or training a classification model via supervised learning (Kang et al., 2014; Bosse et al., 2017; Liang et al., 2020). Since DDR is related to image degradation, it can naturally be used as an excellent prior feature for image distortion identification. To obtain DDR, we do not need any degradation information but only a well-trained SR model (trained on clean data). Following BRISQUE (Mittal et al., 2012a), we adopt the deep representations of SRGAN as input features (using PCA to reduce the original features to a 120-dimensional vector), and then use a linear SVM to classify the degradation types of the LIVE dataset (Sheikh et al., 2006). As shown in Tab. 1, compared with BRISQUE and MLLNet (Liang et al., 2020), DDR features achieve excellent results on recognizing different distortion types. More inspiringly, DDR is obtained without any distortion-related supervision. A minimal sketch of this pipeline follows.
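Below is a minimal sketch of the identification pipeline, assuming scikit-learn; variable names are illustrative:

```python
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def identify_distortion(ddr_train, y_train, ddr_test, y_test):
    """ddr_*: flattened SRGAN deep representations of LIVE images;
    y_*: distortion-type labels."""
    pca = PCA(n_components=120).fit(ddr_train)       # reduce DDR to 120-D
    clf = LinearSVC().fit(pca.transform(ddr_train), y_train)
    return clf.score(pca.transform(ddr_test), y_test)  # classification accuracy
```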
Blind SR with DDR Guidance. To super-resolve real images with unknown degradations, many blind SR methods resort to estimating and utilising the degradation information. For instance, IKC (Gu et al., 2019) iteratively corrects the estimated blur kernel, and DASR (Wang et al., 2021) implicitly learns the degradation representations by contrastive learning. Based on the findings of DDR, we adopt a trained SRGAN model to extract a degradation embedding to promote blind SR models. RRDBNet (Wang et al., 2018) is adopted as the backbone. The DDR embedding is injected into each RRDB module by StyleMod Karras et al. (2020) (see Fig. 10). The training data are described in Tab. 2, e.g., “b+n” means that the training data include blur and noise images. DDR guidance helps improve the model performance. Fig. 11 reveals that DDR guidance can make the deep features become more homogeneous (CHI scores drop from 14.04 to 4.95). A simplified sketch of the injection is given below.
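The following is a simplified feature-wise modulation sketch of the idea, not the exact StyleMod of Karras et al. (2020); names and the affine parameterization are illustrative:

```python
import torch
import torch.nn as nn

class DDRModulation(nn.Module):
    """Modulate RRDB features by a DDR embedding via per-channel
    scale and shift predicted from the embedding."""
    def __init__(self, ddr_dim, n_feats):
        super().__init__()
        self.affine = nn.Linear(ddr_dim, 2 * n_feats)

    def forward(self, x, ddr):
        # x: (B, C, H, W) RRDB features; ddr: (B, ddr_dim) DDR embedding
        scale, shift = self.affine(ddr).chunk(2, dim=1)
        scale = scale[:, :, None, None]
        shift = shift[:, :, None, None]
        return x * (1 + scale) + shift
```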
6 CONCLUSIONS
In this paper, we discover the deep degradation representations in deep SR networks, which are different from high-level vision networks. We demonstrate that a well-trained deep SR network is naturally a good descriptor of degradation information. We reveal the differences in deep representations between classification and SR networks. We draw a series of interesting observations on the intrinsic features of deep SR networks, such as the effects of global residual and adversarial learning. Further, we apply DDR to several fundamental tasks and achieve appealing results. The exploration on DDR is of great significance and inspiration for relevant work.
A APPENDIX
A.1 BACKGROUND
Since the emergence of the deep convolutional neural network (CNN), a large number of computer vision tasks have been drastically promoted, including high-level vision tasks such as image classification Russakovsky et al. (2015); Simonyan & Zisserman (2015); He et al. (2016); Huang et al. (2017); Hu et al. (2018), object localization Ren et al. (2015); He et al. (2017); Redmon et al. (2016) and semantic segmentation Long et al. (2015); Badrinarayanan et al. (2017); Chen et al. (2017); Wang et al. (2020a), as well as low-level vision tasks such as image super-resolution Dong et al. (2014); Ledig et al. (2017); Wang et al. (2018); Zhang et al. (2019); Dai et al. (2019), denoising Zhang et al. (2017; 2018a); Gu et al. (2019); Quan et al. (2020), dehazing Cai et al. (2016); Zhang & Patel (2018); Dong et al. (2020); Deng et al. (2020a), etc. However, an interesting phenomenon is that even though we have successfully applied CNNs to many tasks, we still do not have a thorough understanding of their intrinsic working mechanisms.
To better understand the behaviors of CNNs, many efforts have been put into neural network interpretability for high-level vision Simonyan et al. (2013); Samek et al. (2017); Zeiler & Fergus (2014); Selvaraju et al. (2017); Montavon et al. (2018); Karpathy et al. (2015); Mahendran & Vedaldi (2016); Zhang et al. (2020); Adebayo et al. (2018). Most of them attempt to interpret CNN decisions by visualization techniques, such as visualizing the intermediate feature maps (or saliency maps and class activation maps) Simonyan et al. (2013); Zeiler & Fergus (2014); Adebayo et al. (2018); Zhou et al. (2016); Selvaraju et al. (2017), computing the class notion images which maximize the class score Simonyan et al. (2013), or projecting feature representations Wen et al. (2016); Wang et al. (2020b); Zhu et al. (2018); Huang et al. (2020). For high-level vision tasks, especially image classification, researchers have established a set of techniques for interpreting deep models and have built up a preliminary understanding of CNN behaviors Gu et al. (2018). One representative work is by Zeiler et al. Zeiler & Fergus (2014), who reveal the hierarchical nature of CNNs by visualizing and interpreting the feature maps: the shallow layers respond to low-level features such as corners, curves and other edge/color conjunctions; the middle layers capture more complex texture combinations; the deeper layers are learned to encode more abstract and class-specific patterns, e.g., faces and legs. These patterns can be well interpreted by human perception and help partially explain the CNN decisions for high-level vision tasks.
As for low-level vision tasks, however, similar research work is absent. The possible reasons are as follows. In high-level vision tasks, there are usually artificially predefined semantic labels/categories. Thus, we can intuitively associate feature representations with these labels. Nevertheless, in low-level vision tasks, there are no explicit predefined semantics, making it hard to map the representations into a domain that humans can make sense of. Further, high-level vision usually performs classification in a discrete target domain with distinct categories, while low-level vision aims to solve a regression problem with continuous output values. Hence, without the guidance of predefined category semantics, it seems not so straightforward to interpret low-level vision networks.
In this paper, we take super-resolution (SR), one of the most representative tasks in low-level vision, as the research object. Previously, it was generally thought that the features extracted from an SR network have no specific “semantic” information, and that the network simply learns some complex non-linear functions to model the relations between network input and output. Are the CNN features of SR networks really lacking in semantics? Can we find any kind of “semantics” in SR networks? In this paper, we aim to give an answer to these questions. We reveal that semantics do exist in SR networks. We first discover and interpret the “semantics” of deep representations in SR networks. But different from high-level vision networks, such semantics relate to image degradation types and degrees. Accordingly, we designate the deep semantic representations in SR networks as deep degradation representations (DDR).
A.2 LIMITATIONS
In this paper, we only explore the deep representations of SR networks. Other low-level vision networks are also worth exploring. In the application parts, we apply DDR to three tasks without overly elaborate designs. For blind SR, we make a simple attempt to improve the model performance; the design is not optimal. We believe that there should be a more efficient and effective way to utilize DDR. For generalization evaluation, DDR can only evaluate the model generalization under constrained conditions. It shows the possibility of designing a generalization evaluation metric, but there is still a long way to go before realizing this goal.
A.3 DEEP REPRESENTATIONS OF REAL-WORLD IMAGES
In the main paper, we mainly conduct experiments on synthetic degradations. The difficulty with real-world datasets is that it is hard to keep the content the same while changing the degradations. If we simply use two real-world datasets that contain different contents and different degradations, it is hard to say whether the feature discriminability is targeted at image content or at image degradation. Hence, synthetic data at least allow us to control the variables.
In addition, we find a plausible real-world dataset, Real-City100, proposed in the CameraSR paper. The authors use iPhoneX and NikonD5500 devices to capture controllable images. By adjusting the camera focal length, each camera captures paired images with the same content but different resolutions. The low-resolution images contain real-world degradations such as real noise and real
blur. We test SRGAN on this dataset and obtain the corresponding visualization results, as shown in Fig. 12. It can be seen that the deep representations of SRGAN can still distinguish among different degradations across different devices.
A.4 CLASSIFICATION VS. SUPER-RESOLUTION
A.4.1 FORMULATION
Classification. Classification aims to categorize an input image $X$ into a discrete object class:
$\hat{Y} = G_{CL}(X)$,  (1)
where $G_{CL}$ represents the classification network, and $\hat{Y} \in \mathbb{R}^{C}$ is the predicted probability vector indicating which of the $C$ categories $X$ belongs to. In practice, cross-entropy loss is usually adopted to train the classification network:
$\mathrm{CE}(Y, \hat{Y}) = -\sum_{i=1}^{C} y_i \log \hat{y}_i$,  (2)
where $Y \in \mathbb{R}^{C}$ is a one-hot vector representing the ground-truth class label. $\hat{y}_i$ is the $i$-th element of $\hat{Y}$, indicating the predicted probability that $X$ belongs to the $i$-th class.
Super-resolution. A general image degradation process can be modeled as follows: $X = (Y \otimes k)\downarrow_s + n$,  (3)
where $Y$ is the high-resolution (HR) image and $\otimes$ denotes the convolution operation. $X$ is the degraded low-resolution (LR) image. There are three types of degradation in this model: blur kernel $k$, downsampling operation $\downarrow_s$ and additive noise $n$. Hence, super-resolution can be regarded as a superset of other restoration tasks like denoising and deblurring.
Super-resolution (SR) is the inverse problem of Eq. (3). Given the input LR image $X \in \mathbb{R}^{M \times N}$, the super-resolution network attempts to produce its HR version:
$\hat{Y} = G_{SR}(X)$,  (4)
where $G_{SR}$ represents the super-resolution network, $\hat{Y} \in \mathbb{R}^{sM \times sN}$ is the predicted HR image and $s$ is the upscaling factor. This procedure can be regarded as a typical regression task. At present, there are two groups of methods: MSE-based and GAN-based methods. The former treats SR as a reconstruction problem, utilizing pixel-wise losses such as the L2 loss to achieve high PSNR values.
$L_2(Y, \hat{Y}) = \frac{1}{s^2 N M} \sum_{i=1}^{sN} \sum_{j=1}^{sM} \| Y_{i,j} - \hat{Y}_{i,j} \|_2^2$.  (5)
This is the most widely used loss function in many image restoration tasks Dong et al. (2014); Lim et al. (2017); Zhang et al. (2018b;a); Cai et al. (2016); He et al. (2020). However, such a loss tends to produce over-smoothed images. To generate photo-realistic SR results, the latter method incorporates adversarial learning and perceptual loss to benefit visual perception. The optimization is expressed as the following min-max problem:
$\min_{\theta_{G_{SR}}} \max_{\theta_{D_{SR}}} \; \mathbb{E}_{Y \sim p_{HR}}[\log D_{SR}(Y)] + \mathbb{E}_{X \sim p_{LR}}[\log(1 - D_{SR}(G_{SR}(X)))]$.  (6)
In such adversarial learning, a discriminator $D_{SR}$ is introduced to distinguish super-resolved images from real HR images. Then, the generator loss is defined as:
$L_G = -\log D_{SR}(G_{SR}(X))$.  (7)
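As a reference, the two objectives of Eq. (5) and Eq. (7) can be sketched in PyTorch as follows; the small epsilon for numerical stability and the assumption that the discriminator outputs probabilities in (0, 1) are ours:

```python
import torch
import torch.nn.functional as F

def mse_objective(sr, hr):
    # Eq. (5): pixel-wise L2 loss for the MSE-based method
    return F.mse_loss(sr, hr)

def generator_adv_loss(d_fake):
    # Eq. (7): L_G = -log D_SR(G_SR(X)); d_fake holds the discriminator's
    # probabilities for super-resolved images
    return -torch.log(d_fake + 1e-8).mean()
```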
From the formulation, we can clearly see that image classification and image super-resolution represent two typical tasks in machine learning: classification and regression. The output of the classification task is discrete, while the output of the regression task is continuous.
A.4.2 ARCHITECTURES
Due to the different output types, the CNN architectures of classification and super-resolution networks also differ. Generally, classification networks often contain multiple downsampling layers (e.g., pooling and strided convolution) to gradually reduce the spatial resolution of feature maps. After several convolutional and downsampling layers, there may be one or more fully-connected layers to aggregate global semantic information and generate a vector containing $C$ elements. For the output layer, the SoftMax operator is frequently used to normalize the previously obtained vector into a probabilistic representation. Some renowned classification network structures include AlexNet Krizhevsky et al. (2012), VGG Simonyan & Zisserman (2015), ResNet He et al. (2016), InceptionNet Szegedy et al. (2015); Ioffe & Szegedy (2015); Szegedy et al. (2017), DenseNet Huang et al. (2017), SENet Hu et al. (2018), etc.
Unlike classification networks, super-resolution networks usually do not rely on downsampling layers, but on upsampling layers (e.g., bilinear upsampling, transposed convolution Zeiler et al. (2010) or subpixel convolution Shi et al. (2016)). Thus, the spatial resolution of feature maps increases. Another difference is that the output of the SR network is a three-channel image, rather than an abstract probability vector. Well-known SR network structures include SRCNN Dong et al. (2014), FSRCNN Dong et al. (2016), SRResNet Ledig et al. (2017), RDN Zhang et al. (2018c), RCAN Zhang et al. (2018b), etc. An intuitive comparison of classification and SR networks in CNN architecture is shown in Fig. 18. We can notice that one gradually downsamples while the other gradually upsamples, which displays the discrepancy between high-level and low-level vision tasks in structure design.
Although there are several important architectural differences, classification networks and SR networks can share and adopt some proven effective building modules, like skip connections He et al. (2016); Lim et al. (2017) and attention mechanisms Hu et al. (2018); Zhang et al. (2018b).
A.5 IMPLEMENTATION DETAILS
In the main paper, we conduct experiments on ResNet18 He et al. (2016) and SRResNet/SRGAN Ledig et al. (2017). We elaborate more details on the network structures and training settings here.
For ResNet18, we directly adopt the network structure depicted in He et al. (2016). Cross-entropy loss (Eq. 2) is used as the loss function. The learning rate is initialized to 0.1 and decreased with a cosine annealing strategy. We apply the SGD optimizer with weight decay 5×10−4. The trained model yields an accuracy of 92.86% on the CIFAR10 testing set, which consists of 10,000 images.
For SRResNet-wGR/SRResNet-woGR, we stack 16 residual blocks (RB) as shown in Fig. 3 of the main paper. The residual block is the same as depicted in Wang et al. (2018), in which all the BN layers are removed. Two pixel-shuffle layers Shi et al. (2016) are utilized to conduct upsampling in the network, while the global residual branch is upsampled by bilinear interpolation. L1 loss is adopted as the loss function. The learning rate is initialized to 2 × 10−4 and is halved at [100k, 300k, 500k, 600k] iterations. A total of 600,000 iterations are executed.
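A minimal sketch of this generator is given below; layer details such as activations and initialization are simplified, and class/argument names are illustrative:

```python
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """BN-free residual block, as in Wang et al. (2018)."""
    def __init__(self, n_feats):
        super().__init__()
        self.conv1 = nn.Conv2d(n_feats, n_feats, 3, padding=1)
        self.conv2 = nn.Conv2d(n_feats, n_feats, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

class SRResNetWGR(nn.Module):
    """Skeleton of SRResNet-wGR: 16 residual blocks, two x2 pixel-shuffle
    layers, and a bilinearly upsampled global residual (GR) branch."""
    def __init__(self, n_feats=64, n_blocks=16):
        super().__init__()
        self.head = nn.Conv2d(3, n_feats, 3, padding=1)
        self.body = nn.Sequential(*[ResidualBlock(n_feats)
                                    for _ in range(n_blocks)])
        self.up = nn.Sequential(
            nn.Conv2d(n_feats, 4 * n_feats, 3, padding=1), nn.PixelShuffle(2),
            nn.Conv2d(n_feats, 4 * n_feats, 3, padding=1), nn.PixelShuffle(2))
        self.tail = nn.Conv2d(n_feats, 3, 3, padding=1)

    def forward(self, x):
        out = self.tail(self.up(self.body(self.head(x))))
        gr = F.interpolate(x, scale_factor=4, mode='bilinear',
                           align_corners=False)
        return out + gr  # global residual connection
```

Removing the final `out + gr` addition yields the SRResNet-woGR counterpart.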
For SRGAN-wGR/SRGAN-woGR, the generator is the same as SRResNet-wGR/SRResNet-woGR. The discriminator is designed as in Ledig et al. (2017). Adversarial loss (Eq. 7) and perceptual loss Johnson et al. (2016) are combined as the loss functions, which are kept the same as in Ledig et al. (2017). The learning rate of both generator and discriminator is initialized to 1×10−4 and is halved at [50k, 100k, 200k, 300k] iterations. A total of 600,000 iterations are executed. For all the super-resolution networks, we apply the Adam optimizer Kingma & Ba (2014) with β1 = 0.9 and β2 = 0.99. All the training LR patches are of size 128 × 128. When testing, 32 × 32 patches are fed into the networks to obtain deep features. In practice, we find that the patch size has little effect on revealing the deep degradation representations. All the above models are trained on the PyTorch platform with GeForce RTX 2080 Ti GPUs.
For the experiment on distortion identification, we use the aforementioned trained models to conduct inference on the LIVE dataset Sheikh et al. (2006). We crop the central 96 × 96 patch of each image to feed into the SR networks and obtain the corresponding deep representations. Then, the deep representations of each image are reduced to a 120-dimensional vector using PCA. Afterwards, a linear SVM is adopted as the classification tail. In practice, we find that the vector dimension can be even larger for better performance. Notably, unlike previous methods, the features here are not trained on any degradation-related labels or signals. The SR networks are only trained using clean data. However, the deep representations turn out to be excellent prior features for recognizing various distortion types. This is of great importance and very encouraging.
A.6 DEFINITIONS OF WD, BD AND CHI
In Sec. 3.1 of the main paper, we describe the adopted analysis method for deep feature representations. Many other works have adopted similar approaches to interpret and visualize deep models, such as Graph Attention Network Veličković et al. (2017), Recurrent Networks Karpathy et al. (2015), Deep Q-Network Zahavy et al. (2016) and Neural Models in NLP Li et al. (2015). Most of the aforementioned works adopt t-SNE as a qualitative analysis technique. To better illustrate and quantitatively measure the semantic discriminability of deep feature representations, we take a step further and introduce several indicators, originally used to evaluate clustering performance, computed on the data structure after dimensionality reduction by t-SNE. Specifically, we propose to adopt within-cluster dispersion (WD), between-clusters dispersion (BD) and the Calinski-Harabasz Index (CHI) Caliński & Harabasz (1974) to provide rough yet practicable quantitative measures for reference. For $K$ clusters, WD, BD and CHI are defined as:
$\mathrm{WD}(K) = \sum_{k=1}^{K} \sum_{i=1}^{n(k)} \| x_{ik} - \bar{x}_k \|^2$,  (8)
where $x_{ik}$ represents the $i$-th datapoint belonging to class $k$ and $\bar{x}_k$ is the mean of all $n(k)$ datapoints that belong to class $k$. Datapoints belonging to the same class should be close to each other, and WD measures the compactness within a cluster.
$\mathrm{BD}(K) = \sum_{k=1}^{K} n(k) \| \bar{x}_k - \bar{x} \|^2$,  (9)
where $\bar{x}$ represents the mean of all datapoints. BD measures the distance between clusters. Intuitively, a larger BD value indicates stronger discriminability between different feature clusters. Given $K$ clusters and $N$ datapoints in total ($N = \sum_k n(k)$), combining WD and BD, the CHI is formulated as:
$\mathrm{CHI}(K) = \frac{\mathrm{BD}(K)}{\mathrm{WD}(K)} \cdot \frac{N - K}{K - 1}$.  (10)
It is the ratio of the between-clusters dispersion to the within-cluster dispersion, scaled by the degrees-of-freedom factor $(N-K)/(K-1)$. The CHI score is higher when clusters are dense and well separated, which relates to a standard concept of a cluster.
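A direct NumPy sketch of Eqs. (8)-(10) is given below; it should match, up to implementation details, the calinski_harabasz_score of scikit-learn:

```python
import numpy as np

def chi_score(points, labels):
    """Compute WD (Eq. 8), BD (Eq. 9) and CHI (Eq. 10) from the 2-D
    t-SNE datapoints and their degradation-type labels."""
    points, labels = np.asarray(points), np.asarray(labels)
    x_bar = points.mean(axis=0)                  # global mean
    K, N = len(np.unique(labels)), len(points)
    wd, bd = 0.0, 0.0
    for k in np.unique(labels):
        cluster = points[labels == k]
        c_bar = cluster.mean(axis=0)             # per-cluster mean
        wd += ((cluster - c_bar) ** 2).sum()     # within-cluster dispersion
        bd += len(cluster) * ((c_bar - x_bar) ** 2).sum()  # between clusters
    return (bd / wd) * (N - K) / (K - 1)
```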
Rationality of Using Quantitative Measures with t-SNE. Notably, t-SNE is not a numerical technique but a probabilistic one. It minimizes the Kullback-Leibler (KL) divergence between the distributions that measure pairwise similarities of the input high-dimensional data and of the corresponding low-dimensional points in the embedding. Further, t-SNE is a non-convex optimization process performed using a gradient descent method, as a result of which several optimization parameters need to be chosen, such as perplexity, number of iterations and learning rate. Hence, the reconstructed solutions may differ due to the choice of optimization parameters and the initial random states. In this paper, we used exactly the same optimization procedure for all experiments. Moreover, we conduct extensive experiments using different parameters and demonstrate that the quality of the optima does not vary much from run to run, which is also emphasized in the t-SNE paper. To make the quantitative analysis more statistically solid, for each projection process, we run t-SNE five times and report the average and standard deviation of every metric.
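This protocol can be sketched as follows, assuming the chi_score helper from Sec. A.6 and scikit-learn's t-SNE; seeding by loop index is our illustrative choice for varying the initial random state:

```python
import numpy as np
from sklearn.manifold import TSNE

def repeated_projection(feats50, labels, runs=5):
    """Run t-SNE several times with identical optimization parameters but
    different random initial states, and report the mean and standard
    deviation of the resulting CHI scores."""
    scores = []
    for seed in range(runs):
        pts = TSNE(n_components=2, random_state=seed).fit_transform(feats50)
        scores.append(chi_score(pts, labels))
    return float(np.mean(scores)), float(np.std(scores))
```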
A.7 FROM SHALLOW TO DEEP SR NETWORKS
In the main paper, we reveal that a shallow 3-layer SRCNN Dong et al. (2014) does not manifest representational discriminability on degradation types. Thus, we hypothesize that only deep SR networks possess such degradation-related semantics. To verify this statement, we gradually deepen SRCNN and observe how its deep representations change. We construct SRCNN models with different depths, from a shallow 3 layers to 13 layers. We train these models on DIV2K-clean data (inputs are only downsampled without other degradations) and test them on classical SR benchmarks. As shown in Tab. 4, the model achieves better SR performance with the increase of network depth, suggesting that deeper networks and more parameters lead to greater learning capacity. On the other hand, the deep representations also gradually manifest discriminability on degradation types, as depicted in Fig. 14. When the model only has 3 layers, its representations cannot distinguish different degradation types. However, when we increase the depth to 13 layers, the deep representations begin to show discriminability on degradation types, with the CHI score increasing to 168.12.
A.8 MORE APPLICATIONS
Evaluating the Generalization Ability. According to the discussions in Sec. 4.6, DDR can be used as an approximate evaluation metric for generalization ability. Specifically, given a trained model and several test datasets with different degradations, we can obtain their DDR features. By
evaluating the discriminability of the projection results (clustering effect), we can roughly measure the generalization performance over different degradation types. The worse the clustering effect, the better the generalizability. Fig. 11 shows the DDR clustering of different models. RRDB (clean) is unable to deal with degraded data and obtains lower PSNR values on blur and noise inputs; its CHI score is 322.16. By introducing degraded data into training, the model gains better generalization and the CHI score drops to 14.04. With DDR guidance, the generalization ability is further enhanced and the CHI score decreases to 4.95. These results are consistent with those in the previous section. Interestingly, we do not need ground-truth images to evaluate the model generalization. A similar attempt has been made in the recent work Liu et al. (2022). Note that CHI is only a rough index, which cannot accurately measure minor differences. DDR shows the possibility of designing a generalization evaluation metric, but there is still a long way to go before realizing this goal.
A.9 EXPLORATION ON DIFFERENT DEGRADATION DEGREES
Previously, we introduced deep degradation representations by showing that the deep representations of SR networks are discriminative to different degradation types (e.g., clean, blur and noise). How about the same degradation type but with different degradation degrees? Will the deep representations still be discriminative to them? To explore this question, more experiments and analysis are performed.
We test super-resolution networks on degraded images with different noise degrees and blur degrees. The results are depicted in Tab. 7 and Fig. 17. It can be seen that the deep degradation representations are discriminative not only to cross-degradation (different degradation types) but also to intra-degradation (same degradation type but with different degrees). This suggests that even for the same type of degradation, different degradation degrees will also cause significant differences in features. The greater the difference between degradation degrees, the stronger the discriminability of feature representations. This also reflects another difference between the representation semantics of super-resolution networks and classification networks. For classification, the semantic discriminability of feature representations is generally discrete, because the semantics are associated with discrete object categories. Nevertheless, there appears to be a spectrum (continuous transition) for the discriminability of the deep degradation representations, i.e., the discriminability has a monotonic relationship with the divergence between degradation types and degrees. For example, the degradation difference between noise levels 10 and 20 is less distinct, and the discriminability of feature representations is relatively weaker than that between noise levels 10 and 30.
From Tab. 7, we can draw several notable observations. 1) Compared with blur degradation, noise degradation is easier to discriminate. It is difficult to obtain deep representations that have strong discriminability for different blur levels; even for the GAN-based method, global residual (GR) is indispensable to obtain representations that are discriminative to different blur levels. 2) The representations obtained by the GAN-based method have more discriminative semantics for degradation types and degrees than those of the MSE-based method. 3) Again, global residual strengthens the representation discriminability for degradations.
A.10 EXPLORATION OF NETWORK STRUCTURE
In the main paper, we choose ResNet18 He et al. (2016) and SRResNet/SRGAN Ledig et al. (2017) as the backbones of the classification and SR networks, respectively. In order to eliminate the influence of different network structures, we design a unified backbone framework, which is composed of the same basic building modules but connected with different tails for downsampling and upsampling to conduct classification and super-resolution, respectively.
The unified architecture is shown in Fig. 18. Differing from the residual block in the main paper, we adopt a residual channel attention layer as the basic building block, which is inspired by SENet Hu et al. (2018) and RCAN Zhang et al. (2018b). For classification, the network tail consists of three max-pooling layers and a fully connected layer; for super-resolution, the network tail consists of two pixel-shuffle layers to upsample the feature maps. According to the conclusions in the main paper, we adopt global residual (GR) in the network design to obtain deep degradation representations (DDR). Except for the network structure, all the training protocols are kept the same as in the main paper. The training details are the same as depicted in Sec. A.5. After training, the unified backbone framework for classification yields an accuracy of 92.08% on the CIFAR10 testing set.
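As a rough illustration, the following PyTorch sketch shows how such a unified backbone can share one building block while swapping task-specific tails. The layer sizes, the global-average-pool in the classification tail, and the shared trunk instance are illustrative assumptions; the global residual connection is omitted for brevity:

```python
import torch
import torch.nn as nn

class RCALayer(nn.Module):
    """Residual channel-attention block: the shared building module,
    in the spirit of SENet/RCAN-style attention."""
    def __init__(self, ch=64, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())

    def forward(self, x):
        res = self.body(x)
        return x + res * self.att(res)  # channel-wise re-weighting

def make_tail(task, ch=64, n_classes=10):
    if task == "cls":
        # three max-pooling layers followed by a fully connected layer
        # (a global average pool is added here to fix the feature size)
        return nn.Sequential(
            nn.MaxPool2d(2), nn.MaxPool2d(2), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, n_classes))
    # SR tail: two pixel-shuffle layers to upsample the feature maps (x4)
    return nn.Sequential(
        nn.Conv2d(ch, 4 * ch, 3, padding=1), nn.PixelShuffle(2),
        nn.Conv2d(ch, 4 * ch, 3, padding=1), nn.PixelShuffle(2),
        nn.Conv2d(ch, 3, 3, padding=1))

# The trunk is reused here only for illustration;
# in practice each task trains its own copy.
trunk = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1),
                      *[RCALayer() for _ in range(4)])
cls_net = nn.Sequential(trunk, make_tail("cls"))
sr_net = nn.Sequential(trunk, make_tail("sr"))
```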
The experimental results are shown in Fig. 19, Fig. 20 and Tab. 8. From the results, we can see that the observations are consistent with the findings in the main paper. This suggests that the semantic representations do not stem from the network structure, but from the task itself. Hence, our findings are not limited to specific structures but are universal.
A.11 MORE INSPIRATIONS AND FUTURE WORK
Disentanglement of Image Content and Degradation. In many image editing and synthesizing tasks, researchers seek to disentangle an image through different attributes, so that the image can be finely edited Karras et al. (2019); Ma et al. (2018); Deng et al. (2020b); Lee et al. (2018); Nitzan et al. (2020). For example, semantic face editing Shen et al. (2020a;b); Shen & Zhou (2020) aims at manipulating facial attributes of a given image, e.g., pose, gender, age, smile, etc. Most methods attempt to learn disentangled representations and to control the facial attributes by manipulating the latent space. In low-level vision, deep degradation representations can make it possible to decompose an image into content and degradation information, which can promote a number of new areas, such as degradation transfer and degradation editing. Further, more in-depth research on deep degradation representations will also greatly improve our understanding of the nature of images.
A.12 DISCUSSIONS ON DIMENSIONALITY REDUCTION
Among the numerous dimensionality reduction techniques (e.g., PCA Hotelling (1933), CCA Demartines & Hérault (1997), LLE Roweis & Saul (2000), Isomap Tenenbaum et al. (2000), SNE Hinton & Roweis (2002)), t-Distributed Stochastic Neighbor Embedding (t-SNE) Van der Maaten & Hinton (2008) is a widely-used and effective algorithm. It can greatly capture the local structure of the high-dimensional data and simultaneously reveal global structure such as the presence of clusters at several scales. Following Donahue et al. (2014); Mnih et al. (2015); Wen et al. (2016); Zahavy et al. (2016); Veličković et al. (2017); Wang et al. (2020b); Huang et al. (2020), we also take advantage of the superior manifold learning capability of t-SNE for feature projection.
In this section, we further explain the effectiveness of adopting t-SNE and why we choose to project high-dimensional features into two-dimensional datapoints. We first compare the projection results of PCA and t-SNE. From the results shown in Fig. 21, it can be observed that the features projected by t-SNE are successfully clustered together according to the semantic labels, while the features projected by PCA are not well separated. This is because PCA is a linear dimensionality reduction method, which cannot deal with the complex non-linear data obtained by neural networks. Thus, t-SNE is a better choice for dimensionality reduction on CNN features, which suggests its effectiveness for the purpose of feature projection. Note that we do not claim t-SNE is the optimal or the best choice for dimensionality reduction. We simply utilize t-SNE as a rational tool to show the trend behind deep representations, since t-SNE has proven effective and practical in our experiments and other literature.
Then, we discuss the target dimensions of the reduction. We conduct dimensionality reduction to different numbers of dimensions. Since the highest output dimension supported by t-SNE is 3, we first compare the two-dimensional and three-dimensional features projected by t-SNE. The qualitative and quantitative results are shown in Fig. 21 and Tab. 9. When we reduce the features to three dimensions, the reduced representations also show discriminability to semantic labels. However, quantitative results show that two dimensions portray the discriminability better than three or higher dimensions. For PCA, the results are similar: with higher dimensions, the discriminability decreases. Hence, it is reasonable to reduce high-dimensional features into two-dimensional datapoints. Such settings are also adopted in Donahue et al. (2014); Wang et al. (2020b); Veličković et al. (2017); Huang et al. (2020), where they have proven effective.
A.13 VISUALIZATION OF FEATURE MAPS
So far, we have successfully revealed the degradation-related semantics in SR networks with dimensionality reduction. In this section, we directly visualize the deep feature maps extracted from SR networks to provide some intuitive and qualitative interpretations. Specifically, we extract the feature maps obtained from four models (SRResNet-wGR, SRResNet-woGR, SRGAN-wGR and SRGAN-woGR) on images with different degradations (clean, blur4, noise20). Then we treat each feature map as a one-channel image and plot it. The visualized feature maps are shown in Fig. 22. We select the 8 feature maps with the largest eigenvalues for display; the complete results are shown in the supplementary file.
Influence of degradations on feature maps. From Fig. 22(a), we can observe that the deep features obtained by SRResNet-woGR portray various characteristics of the input image, including edges, textures and contents. In particular, we highlight in “red rectangles” the features that retain most of the image content. As shown in Fig. 22(b), after applying blur and noise degradations to the input image, the extracted features exhibit similar degradations as well: for blurred/noisy input images, the extracted feature maps also contain homologous blur/noise degradations.
Effect of global residual. In Sec. 4.3, we have revealed the importance and effectiveness of global residual (GR) for obtaining deep degradation representations in SR networks. But why is GR so important? What is the role of GR? Through visualization, we can provide a qualitative and intuitive explanation here. Comparing Fig. 22(a) and Fig. 22(b), it can be observed that by adopting GR, the extracted features contain fewer components of the original shape and content information. Thus, GR can help remove redundant image content information and make the network concentrate more on obtaining features that are related to low-level degradation information.
Effect of GAN. Previously, we have discussed the difference between MSE-based and GAN-based SR methods in their deep representations. We find that the GAN-based method can better obtain feature representations that are discriminative to different degradation types. As shown in Fig. 22(a) and Fig. 22(c), the feature maps extracted by the GAN-based method contain less object shape and content information than those of the MSE-based method. This partially explains why the deep representations of the GAN-based method are more discriminative, even without global residual. Comparing Fig. 22(c) and Fig. 22(d), with global residual, the feature maps containing the original image content information are further reduced, leading to stronger discriminability to degradation types.
A.14 SAMPLES OF DIFFERENT DATASETS
In the main paper, we adopt several different datasets to conduct experiments. Fig. 23 displays some example images from these datasets.
(a) DIV2K-clean: the original DIV2K Agustsson & Timofte (2017) dataset. The high-resolution (HR) ground-truth (GT) images have 2K resolution and are of high visual quality. The low-resolution (LR) input images are downsampled from HR by bicubic interpolation, without any further degradations.
(b) DIV2K-noise: adding Gaussian noise to the DIV2K-clean LR input, thus making it contain extra noise degradation. DIV2K-noise20 means the additive Gaussian noise level is σ = 20, where the number denotes the noise level (a sketch of this synthesis is given after the list).
(c) DIV2K-blur: applying Gaussian blur to the DIV2K-clean LR input, thus making it contain extra blur degradation. DIV2K-blur4 means the Gaussian blur width is 4.
(d) DIV2K-mild: officially synthesized from the DIV2K Agustsson & Timofte (2017) dataset as a challenge dataset Timofte et al. (2017; 2018), which contains noise, blur, pixel shifting and other degradations. The degradation modelling is unknown to challenge participants.
(e) Hollywood100: 100 images selected from the Hollywood dataset Laptev et al. (2008), containing real-world old film frames with unknown degradations, which may include compression, noise, blur and other real-world degradations.
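Below is a minimal sketch of how the degraded LR variants in (b) and (c) can be synthesized; treating the blur “width” as the Gaussian sigma is an assumption about the parameterization made for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_div2k_noise(lr, sigma=20):
    """(b) DIV2K-noise20: add Gaussian noise of level sigma on [0, 255]."""
    noisy = lr.astype(np.float64) + np.random.normal(0.0, sigma, lr.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def make_div2k_blur(lr, width=4):
    """(c) DIV2K-blur4: Gaussian blur applied per channel; `width` is used
    here as the kernel sigma (an assumed parameterization)."""
    blurred = np.stack(
        [gaussian_filter(lr[..., c].astype(np.float64), sigma=width)
         for c in range(lr.shape[-1])], axis=-1)
    return np.clip(blurred, 0, 255).astype(np.uint8)

lr = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)  # toy LR patch
noisy, blurry = make_div2k_noise(lr), make_div2k_blur(lr)
```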
Datasets (a), (b), (c) and (d) have the same image contents but different degradations. However, we find that the deep degradation representations (DDR) obtained by SR networks are discriminative to these degradation types, even if the network has not seen these degradations at all during training. Further, for real-world degradations like those in (e), DDR is still able to discern them. | 1. What is the focus and contribution of the paper on deep degradation representations (DDR)?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and ability to generate delicate SR textures?
3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or questions regarding the applicability of DDR to Transformer-based methods or its ability to indicate different levels of degradations? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes that SR networks can learn distinctive “semantics”, named deep degradation representations (DDR). Considering that GAN-based SR models can generate more delicate SR textures, the authors believe that the SR network has learned some kind of semantics. Through comprehensively analyzing the feature representations, the authors discover that the semantics in SR networks are related to different degradations, and that two key factors are adversarial learning and global residual. Experiments show that adversarial learning and global residual are important, and that DDR can be applied in many applications with promising results.
Strengths And Weaknesses
Pros: 1. The paper is well-written and easy to read. 2. This work is well motivated by the observation of applying pretrained SR models to images with different types of degradations. The idea about DDR is very interesting and seems novel to me. 3. The authors conduct extensive experiments to analyze the observation and give reasonable explanations.
Cons: 1. In this work, CNN-based models such as SRResNet and SRGAN are adopted to illustrate the DDR generated by SR networks. Currently, Transformer-based SR networks like SwinIR also achieve promising results. Does the observation about DDR also apply to Transformer-based methods? 2. Do SR models with stronger SR performance perform better in indicating degradation? Can DDR also indicate different levels of degradations? 3. In Fig. 3, why are the training and test sets for CinCGAN and SRGAN chosen to be different? It seems to me that the same setting would be more meaningful. 4. In Section 5, injecting the DDR embedding shows better performance; it would be better to show some visual results for comparison to illustrate the advantage of DDR guidance. 5. DDR can indeed indicate the feature difference among inputs with different degradations; however, is it reasonable to call DDR a kind of semantics?
Clarity, Quality, Novelty And Reproducibility
This work is well motivated and easy to follow. Extensive experiments are conducted to support their insightful analysis.
ICLR | Title
Discovering Distinctive ``Semantics'' in Super-Resolution Networks
Abstract
Image super-resolution (SR) is a representative low-level vision problem. Although deep SR networks have achieved extraordinary success, we are still unaware of their working mechanisms. Specifically, do SR networks learn semantic information, or do they just perform complex mapping functions? What hinders SR networks from generalizing to real-world data? These questions not only raise our curiosity, but also influence SR network development. In this paper, we make the primary attempt to answer the above fundamental questions. After comprehensively analyzing the feature representations (via dimensionality reduction and visualization), we successfully discover the distinctive “semantics” in SR networks, i.e., deep degradation representations (DDR), which relate to image degradation instead of image content. We show that a well-trained deep SR network is naturally a good descriptor of degradation information. Our experiments also reveal two key factors (adversarial learning and global residual) that influence the extraction of such semantics. We further apply DDR in several interesting applications (such as distortion identification, blind SR and generalization evaluation) and achieve promising results, demonstrating the correctness and effectiveness of our findings.
1 INTRODUCTION
The emergence of deep convolutional neural network (CNN) has given birth to a large number of new solutions to low-level vision tasks (Dong et al., 2014; Zhang et al., 2017). Among these signs of progress, image super-resolution (SR) has enjoyed a great performance leap. Compared with traditional methods (e.g., interpolation (Keys, 1981) and sparse coding (Yang et al., 2008)), SR networks can achieve better performance with improved efficiency.
However, even though we have benefited a lot from powerful CNNs, we have little knowledge about what happens inside SR networks and what truly distinguishes them from traditional approaches. Does the performance gain merely come from more complex mapping functions? Or is there anything different inside SR networks, like classification networks with discriminative capability? On the other hand, as a classic regression task, SR is expected to perform a continuous mapping from low-resolution (LR) to high-resolution (HR) images. It is generally a local operation without consideration of the global context. But with the introduction of GAN-based models Ledig et al. (2017); Wang et al. (2018), more delicate SR textures can be generated. It seems that the network has learned some kind of semantics, which is beyond our common perception of regression tasks.
Then, we may raise the question: are there any “semantics” in SR networks? If yes, do these semantics have different definitions from those in classification networks? Existing literature cannot answer these questions, as there is little research on interpreting low-level vision deep models. Nevertheless, discovering the semantics in SR networks is of great importance. It can not only help us further understand the underlying working mechanisms, but also guide us to design better networks and evaluation algorithms.
In this study, we give affirmative answers to the above questions by unfolding the semantics hidden in super-resolution networks. Specifically, different from the artificially predefined semantics associated with object classes in high-level vision, semantics in SR networks are distinct in terms of image degradation instead of image content. Accordingly, we name such semantics deep degradation representations (DDR). More interestingly, such degradation-related semantics are spontaneously existing without any predefined labels. We reveal that a well-trained deep SR network is naturally a good descriptor of degradation information.
Notably, the semantics in this paper have different implications from those in high-level vision. Previously, researchers have disclosed the hierarchical nature of classification networks (Zeiler & Fergus, 2014; Gu et al., 2018). As the layer deepens, the learned features respond more to abstract high-level patterns (e.g., faces and legs), showing a stronger discriminability to object categories (see Fig. 4). However, similar research in low-level vision is absent, since there are no predefined semantic labels. In this paper, we reveal the differences in deep “semantics” between classification and SR networks, as illustrated in Fig. 1.
Our observation stems from a representative blind SR method – CinCGAN Yuan et al. (2018), and we further extend it to more common SR networks – SRResNet and SRGAN Ledig et al. (2017). We have also revealed more interesting phenomena to help interpret the semantics, including the analogy to classification networks and the influential factors for extracting DDR. Moreover, we improve the results of several tasks by exploiting DDR. We believe our findings could lay the groundwork for the interpretability of SR networks, and inspire more exploration of the mechanism of low-level vision deep models.
Contributions. 1) We have successfully discovered the “semantics” in SR networks, denoted as deep degradation representations (DDR). Through in-depth analysis, we also find that global residual learning and adversarial learning can facilitate the SR network to extract such degradation-related representations. 2) We reveal the differences in deep representations between classification and SR networks, for the first time. This further expands our knowledge of the deep representations of highand low-level vision models. 3) We exploit our findings to several fundamental tasks and achieve very appealing results, including distortion identification, blind SR and generalization evaluation.
2 RELATED WORK
Super-resolution. Super-resolution (SR) is a fundamental task in low-level vision, which aims to reconstruct the high-resolution (HR) image from the corresponding low-resolution (LR) counterpart. SRCNN (Dong et al., 2014) is the first proposed CNN-based method for SR. Since then, a large number of deep-learning-based methods have been developed (Dong et al., 2016; Lim et al., 2017; Zhang et al., 2018b; Ledig et al., 2017; Zhang et al., 2019). Generally, current CNN-based SR methods can be categorized into two groups. One is MSE-based method, which targets at minimizing the distortion (e.g., Mean Square Error) between the ground-truth HR image and super-resolved image to yield high PSNR values, such as SRCNN (Dong et al., 2014), VDSR (Kim et al., 2016), EDSR (Lim et al., 2017), RCAN (Zhang et al., 2018b), SAN (Dai et al., 2019), etc. The other is GAN-based method, which incorporates generative adversarial network (GAN) and perceptual loss (Johnson et al., 2016) to obtain perceptually pleasing results, such as SRGAN (Ledig et al., 2017),
Figure 2: Different degraded input images (rows: DIV2K-mild, DIV2K-noise, Hollywood) and their corresponding outputs produced by CinCGAN (Yuan et al., 2018), BM3D (Dabov et al., 2007), and SRCNN (Dong et al., 2014). CinCGAN (Yuan et al., 2018) is trained on the DIV2K-mild dataset in an unpaired manner. If the input image conforms to the training data distribution, CinCGAN will generate better restoration results than BM3D (a). Otherwise, it tends to ignore the unseen degradation types (b)&(c). On the other hand, the traditional method BM3D (Dabov et al., 2007) has stable performance and similar denoising effects on all input images, regardless of the input degradation types. Zoom in for the best view.
ESRGAN (Wang et al., 2018), RankSRGAN (Zhang et al., 2019), SROBB (Rad et al., 2019). Recently, blind SR has attracted more and more attention (Gu et al., 2019; Bell-Kligler et al., 2019; Luo et al., 2020; Wang et al., 2021), which aims to solve SR with unknown real-world degradations. A comprehensive survey of blind SR has recently been proposed (Liu et al., 2021), which summarizes existing methods. We regard SR as a representative research object and study its deep semantic representations, which can also provide inspiration for other low-level vision tasks.
Network interpretability. At present, most existing works on neural network interpretability focus on high-level vision tasks, especially image classification. Zhang et al. (Zhang et al., 2020) systematically reviewed existing literature on network interpretability and proposed a novel taxonomy to categorize it. Here we only discuss several classic works. By adopting deconvolutional networks (Zeiler et al., 2010), Zeiler et al. (Zeiler & Fergus, 2014) projected the downsampled low-resolution feature activations back to the input pixel space, and then performed a sensitivity analysis to reveal which parts of the image are important for classification. Simonyan et al. (Simonyan et al., 2013) generated a saliency map from the gradients through a single backpropagation pass. Based on class activation maps (CAM) (Zhou et al., 2016), Selvaraju et al. (Selvaraju et al., 2017) proposed Grad-CAM (Gradient-weighted CAM) to produce a coarse-grained attribution map of the important regions in the image, which is broadly applicable to any CNN-based architecture. For more information about the network interpretability literature, please refer to the survey paper (Zhang et al., 2020). For low-level vision tasks, however, similar research is rare. Recently, the local attribution map (LAM) (Gu & Dong, 2021) has been proposed to interpret super-resolution networks, which can be used to localize the input features that influence the network outputs. Besides, Wang et al. (Wang et al., 2020b) presented pioneering work that bridges the representation relationship between high- and low-level vision. They learned the mapping between deep representations of low- and high-quality images, and leveraged it as a deep degradation prior (DDP) for low-quality image classification. Inspired by these previous works, we interpret SR networks from another new perspective. We dive into their deep feature representations and discover the “semantics” of SR networks. More background knowledge is described in the supplementary file.
3 MOTIVATION
To begin with, we present an interesting phenomenon, which drives us to start exploring the deep representations of SR networks. It is well known that SR networks are superior to traditional methods in specific scenarios, but are inferior in generalization ability. In blind SR, the degradation types of the input test images are unknown. For traditional methods, they treat different images equally without distinction of degradation types, thus their performance is generally stable and predictable. How about the SR networks, especially those designed for blind SR?
CinCGAN (Yuan et al., 2018) is a representative solution for real-world SR without paired training data. It maps a degraded LR image to its clean version using data distribution learning before conducting the SR operation. However, we find that it still has a limited application scope even though CinCGAN is developed for blind settings. If the degradation of the input image is not included in the training data, CinCGAN will fail to transfer the degraded input to a clean one. More interestingly, instead of producing extra artifacts in the image, it seems that CinCGAN does not process the input image and retains all the original defects. Readers can refer to Fig. 2 for an illustration, where CinCGAN performs well on the testing image of the DIV2K-mild dataset (same distribution as its training data), but produces unsatisfactory results for other degradation types. In other words, the network seems to figure out the specific degradation types within its training data distribution, and a distribution mismatch may make the network “turn off” its ability. This makes the performance of CinCGAN unstable and unpredictable. For comparison, we process the above three types of degraded images with a traditional denoising method, BM3D (Dabov et al., 2007)1. The visual results show that BM3D has an obvious and stable denoising performance for all different degradation types. Although the results of BM3D may be mediocre (the image textures are largely over-smoothed), it does take effect on every input image. This observation reveals a significant discrepancy between traditional methods and SR networks.
The above interesting phenomenon indicates that the deep network has learned more than a regression function, since it demonstrates the ability to distinguish among different degradation types. Inspired by this observation, we try to find any semantics hidden in SR networks.
4 DIVING INTO THE DEEP DEGRADATION REPRESENTATIONS
4.1 DISCRIMINABILITY OF DEEP REPRESENTATIONS IN DEEP SR NETWORKS
Feature projection and visualization. Since the final outputs are always derived from features in CNN layers, we start the exploration with feature maps, especially the deep ones potentially with more global and abstract information. To interpret the deep features of CNN, one common and rational way is to convert the high-dimensional CNN feature maps into lower-dimensional datapoints that can be visualized in a scatterplot. Afterwards, one can intuitively understand the data structures and manifolds. Specifically, we adopt t-Distributed Stochastic Neighbor Embedding (t-SNE) (Van der Maaten & Hinton, 2008) for dimensionality reduction. This algorithm is commonly used in manifold learning, and it has been successfully applied in previous works (Donahue et al., 2014; Mnih et al., 2015; Wen et al., 2016; Zahavy et al., 2016; Veličković et al., 2017; Wang et al., 2020b; Huang et al., 2020) for feature projection and visualization. In our experiments, we first reduce the dimensionality of feature maps to a reasonable amount (50 in this paper) using PCA (Hotelling, 1933), then apply t-SNE to project the 50-dimensional representation to two-dimensional space, after which the results are visualized in a scatterplot. Furthermore, we also introduce CHI (Caliński & Harabasz, 1974) score to quantitatively evaluate the distributions of visualized datapoints. The CHI score is higher when clusters are well separated, which indicates stronger semantic discriminability.
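For reference, the following is a minimal sketch of this projection pipeline, using a forward hook to collect per-image deep features; the tiny convolutional trunk and random inputs are illustrative stand-ins for a trained SR network and real LR patches:

```python
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Stand-in trunk; in practice load a trained SRResNet/SRGAN and hook
# a deep block such as ResBlock16.
head = nn.Conv2d(3, 64, 3, padding=1)
body = nn.Sequential(*[nn.Conv2d(64, 64, 3, padding=1) for _ in range(16)])

feats = []
def hook(module, inp, out):
    # one vector per image: spatially averaged deep feature map
    feats.append(out.mean(dim=(2, 3)).detach())

handle = body[15].register_forward_hook(hook)  # deepest block
with torch.no_grad():
    for _ in range(120):                       # placeholder LR patches
        body(head(torch.rand(1, 3, 32, 32)))
handle.remove()

x = torch.cat(feats).numpy()
x = PCA(n_components=50).fit_transform(x)      # reduce to 50-d first
xy = TSNE(n_components=2).fit_transform(x)     # then project to 2-d
```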
What do the deep features of SR networks represent? As discussed in Sec. 3, since CinCGAN performs differently on various degradations, we compare the features generated from three testing datasets: 1) DIV2K-mild: training and testing data used in CinCGAN, which are synthesized
1Note that BM3D is a denoising method while CinCGAN is able to upsample the resolution of the input image. Thus, after applying BM3D, we apply bicubic interpolation to unify the resolution of the output image. This is reasonable as we only evaluate their denoising effects.
from DIV2K (Agustsson & Timofte, 2017) dataset, containing noise, blur, pixel shifting and other degradations. 2) DIV2K-noise20: add Gaussian noise (σ = 20) to DIV2K set. 3) Hollywood100: 100 images selected from Hollywood dataset (Laptev et al., 2008), containing real-world old film degradations. Each test dataset includes 100 images.
As shown in Fig. 3(a), there is a strong feature discriminability for various degradations. Images with aligned contents but different degradation types are still separated into different clusters.2 This phenomenon conforms to our observation that CinCGAN does treat various input degradations in different ways. It naturally reveals the “semantics” of deep representations in CinCGAN, which are closely related to the degradation types rather than the image content. For comparison, we may wonder whether traditional methods have similar behaviors (or “semantics”). However, our feature analysis method only works for deep models, which contain hierarchical feature maps. It is acknowledged that the simplest network, SRCNN, can be seen as analogous to a sparse-coding-based method; thus we can use SRCNN to shed light on the behaviors of traditional methods. We train an SRCNN3 with the same data as CinCGAN and visualize the feature representations of the last layer in Fig. 3(b). Obviously, different degradations cannot be clearly separated. This phenomenon is completely different from CinCGAN. We can conjecture that the degradation-related semantics only exist in deep models, not in traditional methods or shallow networks. More analysis on shallow networks can be found in the supplementary file.
From CinCGAN to Generic SRGAN. Notably, the training of CinCGAN involves degraded images (DIV2K-mild). It actually performs simultaneous restoration and SR. We also wonder how this kind of degradation-related semantics manifests in classical SR networks (without exposure to other degradation types except for downsampling). Therefore, we adopt a generic GAN-based SR network SRGAN (Ledig et al., 2017; Wang et al., 2018) to conduct the visualization experiment. SRGAN is trained with DIV2K dataset (Agustsson & Timofte, 2017) with only bicubic-downsampled LR images. According to the common degradation modelling in low-level vision, we use three datasets with different degradation types for testing: 1) DIV2K-clean: the original DIV2K validation set containing only bicubic downsampling degradation, which conforms to the training data distribution. 2) DIV2K-blur: introduce blurring degradation with Gaussian blur kernel on the DIV2K-clean set. The kernel width is randomly sampled from [2, 4] for each image and the kernel size is fixed to 15×15. 3) DIV2K-noise: add Gaussian noises to the DIV2K-clean set. The noise level is randomly sampled from [5, 30] for each image. These three testing datasets are aligned in image content but different in degradation types.
As shown in Fig. 3(d), a clustering trend similar to CinCGAN is clearly demonstrated. This provides stronger evidence for the existence of degradation-related semantics. Even though the three testing sets share the same content, they are still separated into distinct clusters according to the degradation types. In the supplementary file, similar phenomena are observed with other network structures. Note again that the shallow SRCNN does not have such feature discriminability (see Fig. 3(c)).
Thus, we successfully find the semantics hidden in deep SR networks. They are perceivable to humans when visualized in low-dimensional space. Specifically, semantics in deep SR networks are in terms of degradation types regardless of the image contents. Simply but vividly, we name this kind of semantics deep degradation representations (DDR).
Is DDR a natural and trivial observation? No, there are three reasons. First, DDR has never been discussed before. The function of deep SR networks is beyond simple regression. The learned deep features can spontaneously characterize the image degradations, indicating that a well-trained deep SR network is naturally a good descriptor of degradation information. Note again that the deep SR networks have not observed any blurry or noisy data during training, but still have discriminative ability for different degradations. Second, DDR in SR is not simply caused by different input patterns. We find that different networks will learn different semantic representations. For example, in Sec. 4.2, we reveal the differences in the learned representations between classification and SR networks. In Sec. 4.3, we show that not all SR network structures can easily obtain DDR. DDR does not exist in certain cases, such as shallow networks. Third, DDR has important applications and inspirations. It can not only expand our understanding of the underlying mechanisms of low-level
2Note that the class labels in the scatterplots are only used to assign a color/symbol to the datapoints for better visualization.
3We use the same architecture as the original paper Dong et al. (2014) and add global residual for better visualization.
vision models, but also help promote the development of other tasks. In Sec. 5, we apply DDR to several fundamental tasks and achieve appealing results, implying the great potential of DDR.
4.2 DIFFERENCES IN SEMANTICS BETWEEN CLASSIFICATION AND SR NETWORKS
In high-level vision, classification is one of the most representative tasks, where artificially predefined semantic labels on object classes are given as supervision. We choose ResNet18 (He et al., 2016) as the classification backbone and conduct experiments on the CIFAR10 dataset (Krizhevsky et al., 2009). We extract the forward features of each input testing image4 at different network layers, as described in Fig. 3(e)-a.
Fig. 4 shows that as the network deepens, the extracted feature representations produce obvious discriminative clusters, i.e., the learned features are increasingly becoming semantically discriminative. Such discriminative semantics in classification networks are coherent with the artificially predefined labels. This is an intuitive and natural observation, on which lots of representation and discriminative learning methods are based (Wen et al., 2016; Oord et al., 2018; Lee et al., 2019; Wang et al., 2020b).
Further, we add blur and noise degradation to the CIFAR10 test images, and then investigate the feature representations of classification and SR networks. Note that no degradation is added to the training data. As shown in Fig. 5, after adding degradations to the test data, the deep representations obtained by the classification network (ResNet18) are still clustered by object categories, indicating that the features focus more on high-level object class information. On the contrary, the deep representations obtained by SR networks (SRResNet and SRGAN) are clustered with regard to degradation types. The features of the same object category are not clustered together, while those of the same degradation type are clustered together, showing different “semantic” discriminability. This phenomenon intuitively illustrates the differences in the deep semantic representations between SR and classification networks, i.e., degradation-related semantics and content-related semantics. More interestingly, the “semantics” in SR networks exists naturally, because the SR networks only see clean data without any input or labelled degradation information.
4.3 HOW DO GLOBAL RESIDUAL AND ADVERSARIAL LEARNING AFFECT THE DEEP REPRESENTATIONS?
Previously, we have elaborated on the deep degradation representations in CinCGAN, SRGAN and SRResNet. Nevertheless, we further discover that not every SR network structure has such a property. To be specific, we find two crucial factors that can influence the learned representations: i) image global residual (GR), and ii) generative adversarial learning (GAN).
4For efficiency, we selected 100 testing images of each category (1000 images in total).
Global Residual. We train two SRResNet networks – SRResNet (with global residual) and SRResNet-woGR (without global residual), as shown in Fig. 3. The two architectures are both common in practice (Kim et al., 2016; Shi et al., 2016). DIV2K (Agustsson & Timofte, 2017) dataset is used for training, where the LR images are bicubic-downsampled and clean. Readers can refer to the supplementary file for more details. After testing, the feature visualization analysis is shown in Fig. 6.
The results show that for MSE-based SR method, GR is essential for producing discriminative representations on degradation types. The features in “ResBlock16” of SRResNet have shown distinct discriminability, where the clean, blur, and noise data are clustered separately. On the contrary, SRResNet-woGR shows no discriminability even in deep layers. This phenomenon reveals that GR significantly impacts the learned feature representations. It is inferred that learning the global residual could remove most of the content information and make the network concentrate more on the contained degradation. This claim is also corroborated by visualizing the feature maps in the supplementary file.
Adversarial Learning. MSE-based and GAN-based methods are currently two prevailing trends in CNN-based SR methods. Previous studies only reveal that the output images of MSE-based and GAN-based methods are different, but the differences between their feature representations are rarely discussed. Since their learning mechanisms are quite different, will there be a discrepancy in their deep feature representations? We directly adopt SRResNet and SRResNet-woGR as generators. Consequently, we build two corresponding GAN-based models, namely SRGAN and SRGAN-woGR. After training, we perform the same test and analysis process mentioned earlier.
The results show that the deep features are bound to be discriminative to degradation types for the GAN-based method, whether there is GR or not. As shown in Fig. 7(d)(h), the deep representations in “ResBlock16” of SRGAN-woGR have already been clustered according to different degradation types. This suggests that the learned deep representations of MSE-based method and GAN-based method are dissimilar. Adversarial learning can help the network learn more informative features for distinguishing image degradation rather than image content.
4.4 HOW DOES DDR EVOLVE THROUGH THE TRAINING PROCESS?
We also reveal the relationship between the model performance and DDR discriminability. We select SRResNet models with different training iterations for testing. We report the model performance
on the DIV2K-clean validation dataset and calculate the CHI scores to evaluate its discriminability on clean, blur and noise data. As shown in Fig. 8, as training progresses, the performance of the model improves, while the feature discriminability for degradation is also enhanced. From random initialization to 700k iterations, the CHI score increases significantly from 0.00 to 591.68, while the PSNR value improves by 2.87dB (due to GR, the initial PSNR value is relatively high). The training data only include clean LR images, but the trained model has the ability to discriminate unseen degradation types. This clearly implies that a well-trained deep SR network is naturally a good descriptor of degradation information.
4.5 FURTHER DISCUSSION ON THE CAUSES OF DDR PHENOMENON
In the previous sections, we reveal several important factors that promote the manifestation of DDR phenomenon, including global residual, adversarial learning (Sec. 4.3) and training iterations (Sec. 4.4). Based on the above findings and more visualization results, we can analyze the causes of DDR more deeply. We visualize the feature maps of SRResNet-wGR, SRResNet-woGR, SRGAN-wGR and SRGAN-woGR on test images with different degradations in the Appendix.
The DDR phenomenon is mainly introduced by overfitting the degradation in the training data. Specifically, since the training data (DIV2K-clean) do not contain extra degradations, the trained SR network lacks the ability to deal with unseen degradations. When fed images with degradations (e.g., noise and blur), it produces features with unprocessed noise or blurring. These patterned features naturally show strong discriminability between different degradations. As for GR, models with GR produce features that contain fewer components of the original content information. GR can help remove redundant image content information and make the network concentrate more on degradation-related information. GAN training also enhances the high-frequency degradation information. Besides, prolonging the training and increasing the network depth will make the network further overfit to the training data.
4.6 WHY SR NETWORKS CAN HARDLY GENERALIZE TO UNSEEN DEGRADATIONS?
Classical SR models (Dong et al., 2014; Lim et al., 2017) assume that the input LR images are generated by a fixed downsampling kernel (e.g., bicubic). However, it is difficult to apply such simple SR models to real scenarios with unknown degradations. We claim that SR and restoration networks learn to overfit the distribution of degradations, rather than the distribution of natural clean images.
To verify our statements, we compare the representations between SRGAN-wGR models trained on clean data and clean+noise data, respectively. As presented in Fig. 9, if the model is trained only on clean LR data, the deep representations show strong discriminability to clean and noise data. In contrast, if the model sees noise data during training, such discriminability diminishes. The model will become more robust to more degradation types, as the distributions of the deep representations become unanimous. In summary, to improve the model generalization for various degradations, we need to diminish the feature discriminability to degradations. Adding more degraded data into training is a plausible way to enhance the generalization.
5 APPLICATIONS AND INSPIRATIONS
Image Distortion Identification Using DDR Features. Image distortion identification (Liang et al., 2020) is an important subsidiary pretreatment for many image processing systems, especially for image quality assessment (IQA). It aims to recognize the distortion type from the distorted images, so as to facilitate the downstream tasks (Mittal et al., 2012a; Gu et al., 2019; Liang et al., 2020). Previous methods usually resort to design handcrafted features that can distinguish different degradation types (Mittal et al., 2012a;b) or train a classification model via supervised learning (Kang et al., 2014; Bosse et al., 2017; Liang et al., 2020). Since DDR is related to image degradation, it can naturally be used as an excellent prior feature for image distortion identification. To obtain DDR, we do not need any degradation information but only a well-trained SR model (train on clean data). Following BRISQUE (Mittal et al., 2012a), we adopt the deep representations of SRGAN as input features (using PCA to reduce the original features to a 120-dimensional vector), and then use linear SVM to classify the degradation types of LIVE dataset (Sheikh et al., 2006). As shown in Tab. 1, compared with BRISQUE and MLLNet (Liang et al., 2020), DDR features achieve excellent results on recognizing different distortion types. More inspiringly, DDR is not obtained by any distortion-related supervision.
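A minimal sketch of this classification tail is given below; the random arrays are placeholders standing in for DDR features extracted from a clean-trained SRGAN and for LIVE distortion labels:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Placeholders for DDR vectors of training/testing images and their
# distortion-type labels (e.g., 5 LIVE distortion categories).
rng = np.random.default_rng(0)
ddr_train, y_train = rng.normal(size=(400, 4096)), rng.integers(0, 5, 400)
ddr_test, y_test = rng.normal(size=(100, 4096)), rng.integers(0, 5, 100)

# PCA to a 120-dimensional vector, then a linear SVM as the classifier.
clf = make_pipeline(PCA(n_components=120), LinearSVC())
clf.fit(ddr_train, y_train)
print("distortion identification accuracy:", clf.score(ddr_test, y_test))
```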
Blind SR with DDR Guidance. To super-resolve real images with unknown degradations, many blind SR methods resort to estimating and utilising the degradation information. For instance, IKC (Gu et al., 2019) iteratively corrects the estimated blur kernel, and DASR (Wang et al., 2021) implicitly learns the degradation representations by contrastive learning. Based on the findings of DDR, we adopt a trained SRGAN model to extract degradation embedding to promote blind SR models. RRDBNet (Wang et al., 2018) is adopted as the backbone. The DDR embedding is injected into each RRDB module by the StyleMod Karras et al. (2020) (see Fig. 10). The training data are described in Tab. 2, e.g., “b+n” means that the training data include blur and noise images. DDR guidance can help improve the model performance. Fig. 11 reveals that DDR guidance can make the deep features become more homogeneous (CHI scores drop from 14.04 to 4.95).
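A simplified sketch of the DDR-guided modulation is shown below. It applies a per-channel scale-and-shift predicted from the DDR embedding, which is a simplified stand-in for the StyleMod injection of Karras et al. (2020); the dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DDRMod(nn.Module):
    """Simplified stand-in for StyleMod-style injection: the DDR embedding
    predicts per-channel scale/shift applied to features inside each
    RRDB-style block."""
    def __init__(self, embed_dim=128, ch=64):
        super().__init__()
        self.affine = nn.Linear(embed_dim, 2 * ch)

    def forward(self, feat, ddr_embed):
        scale, shift = self.affine(ddr_embed).chunk(2, dim=1)
        # broadcast over spatial dims; identity when scale = shift = 0
        return feat * (1 + scale[:, :, None, None]) + shift[:, :, None, None]

mod = DDRMod()
feat = torch.rand(2, 64, 32, 32)  # features inside an RRDB block
ddr = torch.rand(2, 128)          # embedding from a trained SRGAN
out = mod(feat, ddr)
```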
6 CONCLUSIONS
In this paper, we discover the deep degradation representations in deep SR networks, which are different from high-level vision networks. We demonstrate that a well-trained deep SR network is naturally a good descriptor of degradation information. We reveal the differences in deep representations between classification and SR networks. We draw a series of interesting observations on the intrinsic features of deep SR networks, such as the effects of global residual and adversarial learning. Further, we apply DDR to several fundamental tasks and achieve appealing results. The exploration on DDR is of great significance and inspiration for relevant work.
A APPENDIX
A.1 BACKGROUND
Since the emergence of deep convolutional neural networks (CNN), a large number of computer vision tasks have been greatly advanced, including high-level vision tasks such as image classification Russakovsky et al. (2015); Simonyan & Zisserman (2015); He et al. (2016); Huang et al. (2017); Hu et al. (2018), object localization Ren et al. (2015); He et al. (2017); Redmon et al. (2016) and semantic segmentation Long et al. (2015); Badrinarayanan et al. (2017); Chen et al. (2017); Wang et al. (2020a), as well as low-level vision tasks such as image super-resolution Dong et al. (2014); Ledig et al. (2017); Wang et al. (2018); Zhang et al. (2019); Dai et al. (2019), denoising Zhang et al. (2017; 2018a); Gu et al. (2019); Quan et al. (2020), dehazing Cai et al. (2016); Zhang & Patel (2018); Dong et al. (2020); Deng et al. (2020a), etc. However, an interesting phenomenon is that even though we have successfully applied CNNs to many tasks, we still do not have a thorough understanding of their intrinsic working mechanisms.
To better understand the behaviors of CNN, many efforts have been put in the neural network interpretability for high-level vision Simonyan et al. (2013); Samek et al. (2017); Zeiler & Fergus (2014); Selvaraju et al. (2017); Montavon et al. (2018); Karpathy et al. (2015); Mahendran & Vedaldi (2016); Zhang et al. (2020); Adebayo et al. (2018). Most of them attempt to interpret the CNN decisions by visualization techniques, such as visualizing the intermediate feature maps (or saliency maps and class activation maps) Simonyan et al. (2013); Zeiler & Fergus (2014); Adebayo et al. (2018); Zhou et al. (2016); Selvaraju et al. (2017), computing the class notion images which maximize the class score Simonyan et al. (2013), or projecting feature representations Wen et al. (2016); Wang et al. (2020b); Zhu et al. (2018); Huang et al. (2020). For high-level vision tasks, especially image classification, researchers have established a set of techniques for interpreting deep models and have built up a preliminary understanding of CNN behaviors Gu et al. (2018). One representative work is done by Zeiler et al. Zeiler & Fergus (2014), who reveal the hierarchical nature of CNN by visualizing and interpreting the feature maps: the shallow layers respond to low-level features such as corners, curves and other edge/color conjunctions; the middle layers capture more complex texture combinations; the deeper layers are learned to encode more abstract and class-specific patterns, e.g., faces and legs. These patterns can be well interpreted by human perception and help partially explain the CNN decisions for high-level vision tasks.
As for low-level vision tasks, however, similar research work is absent. The possible reasons are as follows. In high-level vision tasks, there are usually artificially predefined semantic labels/categories. Thus, we can intuitively associate feature representations with these labels. Nevertheless, in low-level vision tasks, there are no explicit predefined semantics, making it hard to map the representations into a domain that humans can make sense of. Further, high-level vision usually performs classification in a discrete target domain with distinct categories, while low-level vision aims to solve a regression problem with continuous output values. Hence, without the guidance of predefined category semantics, it seems not so straightforward to interpret low-level vision networks.
In this paper, we take super-resolution (SR), one of the most representative tasks in low-level vision, as the research object. Previously, it was generally thought that the features extracted from an SR network carry no specific “semantic” information, and that the network simply learns complex non-linear functions to model the relations between network input and output. Are the CNN features of SR networks really lacking any semantics? Can we find any kind of “semantics” in SR networks? In this paper, we aim to answer these questions. We reveal that semantics do exist in SR networks. We first discover and interpret the “semantics” of deep representations in SR networks. But different from high-level vision networks, such semantics relate to image degradation types and degrees. Accordingly, we designate the deep semantic representations in SR networks as deep degradation representations (DDR).
A.2 LIMITATIONS
In this paper, we only explore the deep representations of SR networks. Other low-level vision networks are also worth exploring. We apply DDR to three tasks without overly elaborate designs in the application parts. For blind SR, we make a simple attempt to improve the model performance. The design is not optimal, and we believe that there should be a more efficient and effective way to utilize DDR. For generalization evaluation, DDR can only evaluate the model generalization under constrained conditions. It shows the possibility of designing a generalization evaluation metric, but there is still a long way to go to realize this goal.
A.3 DEEP REPRESENTATIONS OF REAL-WORLD IMAGES
In the main paper, we mainly conduct experiments on synthetic degradations. The difficulty with real-world datasets is that it is hard to keep the content the same while changing the degradations. If we simply use two real-world datasets that contain different contents and different degradations, it is hard to say whether the feature discriminability is targeted at the image content or at the image degradation. Hence, synthetic data at least allow us to control the variables.
In addition, we find a plausible real-world dataset, Real-City100, which is proposed in the camera SR paper. The authors use iPhone X and Nikon D5500 devices to capture controllable images. By adjusting the camera focal length, each camera captures paired images with the same content but different resolutions. The low-resolution images contain real-world degradations such as real noise and real blur. We test SRGAN on this dataset and obtain the corresponding visualization results, as shown in Fig. 12. It can be seen that the deep representations of SRGAN can still distinguish among different degradations across different devices.
A.4 CLASSIFICATION VS. SUPER-RESOLUTION
A.4.1 FORMULATION
Classification. Classification aims to categorize an input image X into a discrete object class:
Ŷ = GCL(X), (1)
where GCL represents the classification network, and Ŷ ∈ RC is the predicted probability vector indicating which of the C categoriesX belongs to. In practice, cross-entropy loss is usually adopted to train the classification network:
$\mathrm{CE}(Y, \hat{Y}) = -\sum_{i=1}^{C} y_i \log \hat{y}_i$, (2)
where Y ∈ RC is a one-hot vector representing the ground-truth class label. ŷi is the i-th row element of Ŷ , indicating the predicted probability that X belongs to the i-th class.
Super-resolution. A general image degradation process can be modeled as follows: $X = (Y \otimes k)\downarrow_s + n$, (3)
where Y is the high-resolution (HR) image and ⊗ denotes the convolution operation. X is the degraded low-resolution (LR) image. There are three types of degradation in this model: blur kernel k, downsampling operation ↓s and additive noise n. Hence, super-resolution can be regarded as a superset of other restoration tasks like denoising and deblurring.
Super-resolution (SR) is the inverse problem of Equ. (3). Given the input LR image X ∈ RM×N , the super-resolution network attempts to produce its HR version:
Ŷ = GSR(X), (4)
where GSR represents the super-resolution network, Ŷ ∈ RsM×sN is the predicted HR image and s is the upscaling factor. This procedure can be regarded as a typical regression task. At present, there are two groups of methods: MSE-based and GAN-based methods. The former treats SR as a reconstruction problem and utilizes pixel-wise losses such as the L2 loss to achieve high PSNR values.
$L_2(Y, \hat{Y}) = \frac{1}{s^2 NM} \sum_{i=1}^{sN} \sum_{j=1}^{sM} \| Y_{i,j} - \hat{Y}_{i,j} \|_2^2$. (5)
This is the most widely used loss function in many image restoration tasks Dong et al. (2014); Lim et al. (2017); Zhang et al. (2018b;a); Cai et al. (2016); He et al. (2020). However, such a loss tends to produce over-smoothed images. To generate photo-realistic SR results, the latter method incorporates adversarial learning and perceptual loss for better visual perception. The optimization is expressed as the following min-max problem:
$\min_{\theta_{G_{SR}}} \max_{\theta_{D_{SR}}} \; \mathbb{E}_{Y \sim p_{HR}}[\log D_{SR}(Y)] + \mathbb{E}_{X \sim p_{LR}}[\log(1 - D_{SR}(G_{SR}(X)))]$. (6)
In such adversarial learning, a discriminator $D_{SR}$ is introduced to distinguish super-resolved images from real HR images. Then, the generator loss is defined as:
$L_G = -\log D_{SR}(G_{SR}(X))$. (7)
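In PyTorch, these two training signals can be sketched as follows; the logits-based form of Eq. (7) is an assumption that the discriminator outputs pre-sigmoid scores:

```python
import torch
import torch.nn.functional as F

def l2_loss(hr, sr):
    # Eq. (5): mean squared error over all pixels of the SR/HR pair
    return F.mse_loss(sr, hr)

def generator_loss(d_fake_logits):
    # Eq. (7): -log D(G(X)); with a sigmoid discriminator this equals
    # binary cross-entropy against an all-"real" target
    return F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))

sr, hr = torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128)
print(l2_loss(hr, sr).item(), generator_loss(torch.randn(2, 1)).item())
```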
From the formulation, we can clearly see that image classification and image super-resolution represent two typical tasks in machine learning: classification and regression. The output of the classification task is discrete, while the output of the regression task is continuous.
A.4.2 ARCHITECTURES
Due to the different output types, the CNN architectures of classification and super-resolution networks also differ. Generally, classification networks often contain multiple downsampling layers (e.g., pooling and strided convolution) to gradually reduce the spatial resolution of feature maps. After several convolutional and downsampling layers, there may be one or more fully-connected layers to aggregate global semantic information and generate a vector containing C elements. For the output layer, the SoftMax operator is frequently used to normalize the previously obtained vector into a probabilistic representation. Some renowned classification network structures include AlexNet Krizhevsky et al. (2012), VGG Simonyan & Zisserman (2015), ResNet He et al. (2016), InceptionNet Szegedy et al. (2015); Ioffe & Szegedy (2015); Szegedy et al. (2017), DenseNet Huang et al. (2017), SENet Hu et al. (2018), etc.
Unlike classification networks, super-resolution networks usually do not rely on downsampling layers but on upsampling layers (e.g., bilinear upsampling, transposed convolution Zeiler et al. (2010) or subpixel convolution Shi et al. (2016)). Thus, the spatial resolution of the feature maps increases. Another difference is that the output of the SR network is a three-channel image, rather than an abstract probability vector. Well-known SR network structures include SRCNN Dong et al. (2014), FSRCNN Dong et al. (2016), SRResNet Ledig et al. (2017), RDN Zhang et al. (2018c), RCAN Zhang et al. (2018b), etc. An intuitive comparison of classification and SR networks in CNN architecture is shown in Fig. 18. We can notice that one gradually downsamples and the other gradually upsamples, which displays the discrepancy between high-level and low-level vision tasks in structure design.
Although there are several important architectural differences, classification networks and SR networks can share and adopt some proven effective building modules, like skip connection He et al. (2016); Lim et al. (2017) and attention mechanism Hu et al. (2018); Zhang et al. (2018b).
A.5 IMPLEMENTATION DETAILS
In the main paper, we conduct experiments on ResNet18 He et al. (2016) and SRResNet/SRGAN Ledig et al. (2017). We elaborate more details on the network structures and training settings here.
For ResNet18, we directly adopt the network structure depicted in He et al. (2016). Cross-entropy loss (Eq. 2) is used as the loss function. The learning rate is initialized to 0.1 and decreased with a cosine annealing strategy. We apply the SGD optimizer with weight decay 5×10−4. The trained model yields an accuracy of 92.86% on the CIFAR10 testing set, which consists of 10,000 images.
For SRResNet-wGR/SRResNet-woGR, we stack 16 residual blocks (RB) as shown in Fig. 3 of the main paper. The residual block is the same as depicted in Wang et al. (2018), in which all the BN layers are removed. Two pixel-shuffle layers Shi et al. (2016) are utilized for upsampling in the network, while the global residual branch is upsampled by bilinear interpolation. L1 loss is adopted as the loss function. The learning rate is initialized to 2×10−4 and is halved at [100k, 300k, 500k, 600k] iterations. A total of 600,000 iterations are executed.
For SRGAN-wGR/SRGAN-woGR, the generator is the same as SRResNet-wGR/SRResNet-woGR. The discriminator is designed as in Ledig et al. (2017). Adversarial loss (Eq. 7) and perceptual loss Johnson et al. (2016) are combined as the loss functions, which are kept the same as in Ledig et al. (2017). The learning rate of both the generator and the discriminator is initialized to 1×10−4 and is halved at [50k, 100k, 200k, 300k] iterations. A total of 600,000 iterations are executed. For all the super-resolution networks, we apply the Adam optimizer Kingma & Ba (2014) with β1 = 0.9 and β2 = 0.99. All the training LR patches are of size 128 × 128. When testing, 32 × 32 patches are fed into the networks to obtain deep features. In practice, we find that the patch size has little effect on revealing the deep degradation representations. All the above models are trained on the PyTorch platform with GeForce RTX 2080 Ti GPUs.
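The optimizer and schedule described above can be set up as in the following sketch (the one-layer model is a stand-in for the SR generator):

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, 3, padding=1)   # stand-in for SRResNet-wGR
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.99))
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer,                          # halve the lr at the milestones
    milestones=[100_000, 300_000, 500_000, 600_000], gamma=0.5)
criterion = nn.L1Loss()                 # loss for the MSE-based model
```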
For the experiment of distortion identification, we use the aforementioned trained models to conduct inference on the LIVE dataset Sheikh et al. (2006). We crop the central 96 × 96 patch of each image to feed into the SR networks and obtain the corresponding deep representations. Then, the deep representations of each image are reduced to a 120-dimensional vector using PCA. Afterwards, a linear SVM is adopted as the classification tail. In practice, we find that performance can be further improved with an even larger vector dimension. Notably, unlike previous methods, the features here are not trained on any degradation-related labels or signals. The SR networks are trained only on clean data. Nevertheless, the deep representations serve as excellent prior features for recognizing various distortion types. This is of great importance and very encouraging.
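The identification pipeline could look roughly as follows with scikit-learn, where `deep_features` is a (num_images, feature_dim) array of flattened SR representations and `labels` holds the LIVE distortion types; the cross-validated evaluation split is our assumption:

```python
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# deep_features: (n_images, feat_dim) SR-network representations (assumed given)
# labels: distortion-type labels from the LIVE dataset (assumed given)
feats = PCA(n_components=120).fit_transform(deep_features)
scores = cross_val_score(LinearSVC(), feats, labels, cv=5)
print('distortion identification accuracy: %.3f +/- %.3f'
      % (scores.mean(), scores.std()))
```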
A.6 DEFINITIONS OF WD, BD AND CHI
In Sec. 3.1 of the main paper, we describe the adopted analysis method for deep feature representations. Many other works have also adopted similar approaches to interpret and visualize deep models, such as Graph Attention Network Veličković et al. (2017), Recurrent Networks Karpathy et al. (2015), Deep Q-Network Zahavy et al. (2016) and Neural Models in NLP Li et al. (2015). Most of the aforementioned works adopt t-SNE as a qualitative analysis technique. To better illustrate and quantitatively measure the semantic discriminability of deep feature representations, we take a step further and introduce several indicators, originally used to evaluate clustering performance, computed on the data structure after dimensionality reduction by t-SNE. Specifically, we propose to adopt within-cluster dispersion (WD), between-clusters dispersion (BD) and the Calinski-Harabasz Index (CHI) Caliński & Harabasz (1974) to provide some rough yet practicable quantitative measures for reference. For K clusters, WD, BD and CHI are defined as:
WD(K) = \sum_{k=1}^{K} \sum_{i=1}^{n(k)} \| x_{ik} - \bar{x}_k \|^2, \quad (8)
where $x_{ik}$ represents the i-th datapoint belonging to class k and $\bar{x}_k$ is the mean of all n(k) datapoints that belong to class k. Datapoints belonging to the same class should be close to each other, and WD measures the compactness within a cluster.
BD(K) = \sum_{k=1}^{K} n(k) \, \| \bar{x}_k - \bar{x} \|^2, \quad (9)
where $\bar{x}$ represents the mean of all datapoints. BD measures the distance between clusters. Intuitively, a larger BD value indicates stronger discriminability between different feature clusters. Given K clusters and N datapoints in total ($N = \sum_k n(k)$), by combining WD and BD, the CHI is formulated as:
CHI(K) = \frac{BD(K)}{WD(K)} \cdot \frac{N - K}{K - 1}. \quad (10)
It is the ratio of the between-clusters dispersion and the within-cluster dispersion. The CHI score is higher when clusters are dense and well separated, which corresponds to the standard notion of a good clustering.
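For reference, the three indicators can be computed directly from the projected datapoints; a minimal NumPy sketch of Eqs. 8-10 is:

```python
import numpy as np

def wd_bd_chi(x, y):
    """x: (N, 2) t-SNE embeddings; y: (N,) integer class labels."""
    classes = np.unique(y)
    K, N = len(classes), len(x)
    x_bar = x.mean(axis=0)                       # mean of all datapoints
    wd = bd = 0.0
    for k in classes:
        xk = x[y == k]
        xk_bar = xk.mean(axis=0)
        wd += ((xk - xk_bar) ** 2).sum()              # within-cluster dispersion, Eq. 8
        bd += len(xk) * ((xk_bar - x_bar) ** 2).sum() # between-clusters dispersion, Eq. 9
    chi = (bd / wd) * (N - K) / (K - 1)          # Calinski-Harabasz Index, Eq. 10
    return wd, bd, chi
```

The CHI value computed this way coincides with `sklearn.metrics.calinski_harabasz_score`.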
Rationality of Using Quantitative Measures with t-SNE. Notably, t-SNE is not a numerical technique but a probabilistic one. It minimizes the Kullback-Leibler (KL) divergence between the distributions that measure pairwise similarities of the input high-dimensional data and of the corresponding low-dimensional points in the embedding. Further, t-SNE is a non-convex optimization process performed by gradient descent, as a result of which several optimization parameters need to be chosen, like the perplexity, the number of iterations and the learning rate. Hence, the reconstructed solutions may differ due to the choice of optimization parameters and the initial random states. In this paper, we use exactly the same optimization procedure for all experiments. Moreover, we conduct extensive experiments using different parameters and demonstrate that the quality of the optima does not vary much from run to run, which is also emphasized in the t-SNE paper. To make the quantitative analysis more statistically solid, for each projection process, we run t-SNE five times and report the average and standard deviation of every metric.
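This averaging protocol could be sketched as below, where `features` and `labels` are the deep representations and degradation labels, `wd_bd_chi` is the helper defined above, and the t-SNE hyperparameters are left at their library defaults as an assumption:

```python
import numpy as np
from sklearn.manifold import TSNE

chis = []
for seed in range(5):                    # five runs with different random states
    emb = TSNE(n_components=2, random_state=seed).fit_transform(features)
    chis.append(wd_bd_chi(emb, labels)[2])
print('CHI: %.2f +/- %.2f' % (np.mean(chis), np.std(chis)))
```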
A.7 FROM SHALLOW TO DEEP SR NETWORKS
In the main paper, we reveal that a shallow 3-layer SRCNN Dong et al. (2014) does not manifest representational discriminability for degradation types. Thus, we hypothesize that only deep SR networks possess such degradation-related semantics. To verify this statement, we gradually deepen SRCNN and observe how its deep representations change. We construct SRCNN models with different depths, from a shallow 3 layers up to 13 layers. We train these models on DIV2K-clean data (inputs are only downsampled without other degradations) and test them on classical SR benchmarks. As shown in Tab. 4, the model achieves better SR performance as the network depth increases, suggesting that deeper networks and more parameters lead to greater learning capacity. On the other hand, the deep representations also gradually manifest discriminability for degradation types, as depicted in Fig. 14. When the model has only 3 layers, its representations cannot distinguish different degradation types. However, when we increase the depth to 13 layers, the deep representations begin to show discriminability for degradation types, with the CHI score increasing to 168.12.
A.8 MORE APPLICATIONS
Evaluating the Generalization Ability. According to the discussion in Sec. 4.6, DDR can be used as an approximate evaluation metric for generalization ability. Specifically, given a trained model and several test datasets with different degradations, we can obtain their DDR features. By evaluating the discriminability of the projection results (the clustering effect), we can roughly measure the generalization performance over different degradation types. The worse the clustering effect, the better the generalizability. Fig. 11 shows the DDR clustering of different models. RRDB (clean) is unable to deal with degraded data and obtains lower PSNR values on blur and noise inputs. Its CHI score is 322.16. By introducing degraded data into training, the model gains better generalization and the CHI score drops to 14.04. With DDR guidance, the generalization ability is further enhanced and the CHI score decreases to 4.95. These results are consistent with those in the previous section. Interestingly, we do not need ground-truth images to evaluate model generalization. A similar attempt has been made in recent work Liu et al. (2022). Note that CHI is only a rough index, which cannot accurately measure minor differences. DDR shows the possibility of designing a generalization evaluation metric, but there is still a long way to go to realize this goal.
A.9 EXPLORATION ON DIFFERENT DEGRADATION DEGREES
Previously, we introduced deep degradation representations by showing that the deep representations of SR networks are discriminative to different degradation types (e.g., clean, blur and noise). What about the same degradation type but with different degrees of degradation? Will the deep representations still be discriminative to them? To explore this question, more experiments and analysis are performed.
We test super-resolution networks on degraded images with different noise degrees and blur degrees. The results are depicted in Tab. 7 and Fig. 17. It can be seen that the deep degradation representations are discriminative not only to cross-degradation (different degradation types) but also to intra-degradation (the same degradation type with different degrees). This suggests that even for the same type of degradation, different degradation degrees also cause significant differences in features. The greater the difference between degradation degrees, the stronger the discriminability of the feature representations. This also reflects another difference between the representation semantics of super-resolution and classification networks. For classification, the semantic discriminability of feature representations is generally discrete, because the semantics are associated with discrete object categories. Nevertheless, there appears to be a spectrum (a continuous transition) in the discriminability of the deep degradation representations, i.e., the discriminability has a monotonic relationship with the divergence between degradation types and degrees. For example, the degradation difference between noise levels 10 and 20 is not very distinct, and the discriminability of the feature representations is relatively smaller, compared with noise levels 10 and 30.
From Tab. 7, we make several notable observations. 1) Compared with blur degradation, noise degradation is easier to discriminate. It is difficult to obtain deep representations with strong discriminability for different blur levels; even for the GAN-based method, global residual (GR) is indispensable for obtaining representations that are discriminative to different blur levels. 2) The representations obtained by the GAN-based method have more discriminative semantics for degradation types and degrees than those of the MSE-based method. 3) Again, global residual strengthens the representation discriminability for degradations.
A.10 EXPLORATION OF NETWORK STRUCTURE
In the main paper, we choose ResNet18 He et al. (2016) and SRResNet/SRGAN Ledig et al. (2017) as the backbones of the classification and SR networks, respectively. In order to eliminate the influence of different network structures, we design a unified backbone framework, which is composed of the same basic building modules but connected with different tails for downsampling and upsampling to conduct classification and super-resolution, respectively.
The unified architecture is shown in Fig. 18. To differ from the residual block in the main paper, we adopt a residual channel attention layer as the basic building block, which is inspired by SENet Hu et al. (2018) and RCAN Zhang et al. (2018b). For classification, the network tail consists of three max-pooling layers and a fully connected layer; for super-resolution, the network tail consists of two pixel-shuffle layers to upsample the feature maps. According to the conclusions in the main paper, we adopt global residual (GR) in the network design to obtain deep degradation representations (DDR). Except for the network structure, all the training protocols are kept the same as in the main paper. The training details are the same as depicted in Sec. A.5. After training, the unified backbone framework for classification yields an accuracy of 92.08% on the CIFAR10 testing set.
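A condensed sketch of the building block and the two tails is given below; the trunk of stacked blocks and the global residual branch are omitted for brevity, and the channel widths and reduction ratio are assumptions:

```python
import torch.nn as nn

class RCALayer(nn.Module):
    """Residual channel attention layer, after SENet / RCAN."""
    def __init__(self, nf=64, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(nf, nf, 3, padding=1), nn.ReLU(),
            nn.Conv2d(nf, nf, 3, padding=1))
        self.att = nn.Sequential(                 # channel attention weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(nf, nf // reduction, 1), nn.ReLU(),
            nn.Conv2d(nf // reduction, nf, 1), nn.Sigmoid())

    def forward(self, x):
        res = self.body(x)
        return x + res * self.att(res)

def make_tail(task, nf=64, num_classes=10):
    if task == 'classification':              # three max-pooling layers + FC
        return nn.Sequential(
            nn.MaxPool2d(2), nn.MaxPool2d(2), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(nf * 4 * 4, num_classes))  # 32x32 input assumed
    # super-resolution: two pixel-shuffle layers for x4 upsampling
    return nn.Sequential(
        nn.Conv2d(nf, nf * 4, 3, padding=1), nn.PixelShuffle(2),
        nn.Conv2d(nf, nf * 4, 3, padding=1), nn.PixelShuffle(2),
        nn.Conv2d(nf, 3, 3, padding=1))
```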
The experimental results are shown in Fig. 19, Fig. 20 and Tab. 8. From the results, we can see that the observations are consistent with the findings in the main paper. This suggests that the semantic representations do not stem from the network structures, but from the task itself. Hence, our findings are not limited to specific structures but are universal.
A.11 MORE INSPIRATIONS AND FUTURE WORK
Disentanglement of Image Content and Degradation. In many image editing and synthesis tasks, researchers seek to disentangle an image into different attributes, so that the image can be finely edited Karras et al. (2019); Ma et al. (2018); Deng et al. (2020b); Lee et al. (2018); Nitzan et al. (2020). For example, semantic face editing Shen et al. (2020a;b); Shen & Zhou (2020) aims at manipulating facial attributes of a given image, e.g., pose, gender, age, smile, etc. Most methods attempt to learn disentangled representations and to control the facial attributes by manipulating the latent space. In low-level vision, deep degradation representations can make it possible to decompose an image into content and degradation information, which can promote a number of new areas, such as degradation transferring and degradation editing. Further, more in-depth research on deep degradation representations will also greatly improve our understanding of the nature of images.
A.12 DISCUSSIONS ON DIMENSIONALITY REDUCTION
Among the numerous dimensionality reduction techniques (e.g., PCA Hotelling (1933), CCA Demartines & Hérault (1997), LLE Roweis & Saul (2000), Isomap Tenenbaum et al. (2000), SNE Hinton & Roweis (2002)), t-Distributed Stochastic Neighbor Embedding (t-SNE) Van der Maaten & Hinton (2008) is a widely used and effective algorithm. It can greatly capture the local structure of high-dimensional data and simultaneously reveal global structure such as the presence of clusters at several scales. Following Donahue et al. (2014); Mnih et al. (2015); Wen et al. (2016); Zahavy et al. (2016); Veličković et al. (2017); Wang et al. (2020b); Huang et al. (2020), we also take advantage of the superior manifold learning capability of t-SNE for feature projection.
In this section we further explain the effectiveness of adopting t-SNE and why we choose to project high-dimensional features into two-dimensional datapoints. We first compare the projection results of PCA and t-SNE. From the results shown in Fig. 21, it can be observed that the features projected by t-SNE are successfully clustered together according to the semantic labels, while the features projected by PCA are not well separated. This is because PCA is a linear dimensionality reduction method, which cannot deal with the complex non-linear data produced by neural networks. Thus, t-SNE is a better choice for conducting dimensionality reduction on CNN features, which demonstrates its effectiveness for the purpose of feature projection. Note that we do not claim t-SNE is the optimal or best choice for dimensionality reduction. We simply utilize t-SNE as a rational tool to show the trend behind deep representations, since it has been proven effective and practical in our experiments and in other works.
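The comparison in Fig. 21 corresponds to the following kind of sketch, with `features` and `labels` denoting the deep representations and their semantic labels:

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

emb_pca = PCA(n_components=2).fit_transform(features)
emb_tsne = TSNE(n_components=2).fit_transform(features)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, emb, name in [(axes[0], emb_pca, 'PCA'), (axes[1], emb_tsne, 't-SNE')]:
    ax.scatter(emb[:, 0], emb[:, 1], c=labels, s=4, cmap='tab10')
    ax.set_title(name)
plt.show()
```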
Then, we discuss the target dimensionality. We conduct dimensionality reduction to different numbers of dimensions. Since the highest dimension supported by t-SNE is 3, we first compare the two-dimensional and three-dimensional features projected by t-SNE. The qualitative and quantitative results are shown in Fig. 21 and Tab. 9. When we reduce the features to three dimensions, the reduced representations also show discriminability with respect to the semantic labels. However, the quantitative results show that two dimensions portray the discriminability better than three or higher dimensions. For PCA, the results are similar: with higher dimensions, the discriminability decreases. Hence, it is reasonable to reduce high-dimensional features to two-dimensional datapoints. Such settings are also adopted in Donahue et al. (2014); Wang et al. (2020b); Veličković et al. (2017); Huang et al. (2020), where they are proven effective.
A.13 VISUALIZATION OF FEATURE MAPS
So far, we have successfully revealed the degradation-related semantics in SR networks with dimensionality reduction. In this section, we directly visualize the deep feature maps extracted from SR networks to provide some intuitive and qualitative interpretations. Specifically, we extract the feature maps obtained from four models (SRResNet-wGR, SRResNet-woGR, SRGAN-wGR and SRGAN-woGR) on images with different degradations (clean, blur4, noise20), respectively. Then we treat each feature map as a one-channel image and plot it. The visualized feature maps are shown in Fig. 22. We select the 8 feature maps with the largest eigenvalues for display. The complete results are shown in the supplementary file.
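Feature maps can be harvested with a forward hook, as sketched below; `model` and `lr_image` are assumed given, `model.body` is a placeholder for the layer being analyzed, and we rank channels by their variance as a stand-in for the eigenvalue criterion:

```python
import torch
import matplotlib.pyplot as plt

feats = {}
def save_to(name):
    def hook(module, inputs, output):
        feats[name] = output.detach()
    return hook

model.body.register_forward_hook(save_to('body'))   # layer name is a placeholder
with torch.no_grad():
    model(lr_image)                   # clean / blur4 / noise20 input (assumed)

fmap = feats['body'][0]               # (C, H, W) feature maps
idx = fmap.flatten(1).var(dim=1).argsort(descending=True)[:8]
for i, c in enumerate(idx):           # plot the 8 most energetic channels
    plt.subplot(2, 4, i + 1)
    plt.imshow(fmap[c].cpu(), cmap='viridis')
    plt.axis('off')
plt.show()
```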
Influence of degradations on feature maps. From Fig. 22(a), we can observe that the deep features obtained by SRResNet-woGR portray various characteristics of the input image, including edges, textures and contents. In particular, we highlight in “red rectangles” the features that retain most of the image content. As shown in Fig. 22(b), after applying blur and noise degradations to the input image, the extracted features exhibit similar degradations as well. For blurred/noisy input images, the extracted feature maps also contain homologous blur/noise degradations.
Effect of global residual. In Sec. 4.3, we revealed the importance and effectiveness of global residual (GR) for obtaining deep degradation representations in SR networks. But why is GR so important? What is its role? Through visualization, we can provide a qualitative and intuitive explanation here. Comparing Fig. 22(a) and Fig. 22(b), it can be observed that by adopting GR, the extracted features contain fewer components of the original shape and content information. Thus, GR helps remove redundant image content information and makes the network concentrate more on obtaining features that are related to low-level degradation information.
Effect of GAN. Previously, we discussed the difference between MSE-based and GAN-based SR methods in their deep representations. We find that the GAN-based method can better obtain feature representations that are discriminative to different degradation types. As shown in Fig. 22(a) and Fig. 22(c), the feature maps extracted by the GAN-based method contain less object shape and content information compared with the MSE-based method. This partially explains why the deep representations of the GAN-based method are more discriminative, even without global residual. Comparing Fig. 22(c) and Fig. 22(d), when there is a global residual, the feature maps containing the original image content information are further reduced, leading to stronger discriminability to degradation types.
A.14 SAMPLES OF DIFFERENT DATASETS
In the main paper, we adopt several different datasets to conduct experiments. Fig. 23 displays some example images from these datasets.
(a) DIV2K-clean: the original DIV2K Agustsson & Timofte (2017) dataset. The high-resolution (HR) ground-truth (GT) images have 2K resolution and are of high visual quality. The low-resolution (LR) input images are downsampled from HR by bicubic interpolation, without any further degradations.
(b) DIV2K-noise: adding Gaussian noise to the DIV2K-clean LR input, thus making it contain extra noise degradation. DIV2K-noise20 means the additive Gaussian noise level σ is 20, where the number denotes the noise level.
(c) DIV2K-blur: applying Gaussian blur to the DIV2K-clean LR input, thus making it contain extra blur degradation. DIV2K-blur4 means the Gaussian blur width is 4. (A synthesis sketch for variants (a)-(c) is given after this list.)
(d) DIV2K-mild: officially synthesized from the DIV2K Agustsson & Timofte (2017) dataset as a challenge dataset Timofte et al. (2017; 2018), which contains noise, blur, pixel shifting and other degradations. The degradation modelling is unknown to challenge participants.
(e) Hollywood100: 100 images selected from Hollywood dataset Laptev et al. (2008), containing real-world old film frames with unknown degradations, which may have compression, noise, blur and other real-world degradations.
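The synthetic variants (a)-(c) could be reproduced roughly as follows; the blur kernel size, the order of blurring and downsampling, and the value ranges are assumptions, and the degradations of (d) and (e) are unknown by construction:

```python
import numpy as np
import cv2

def synthesize_lr(hr, scale=4, noise_sigma=0, blur_width=0):
    """hr: HxWx3 uint8 clean image -> degraded LR input (DIV2K-style)."""
    img = hr.astype(np.float32)
    if blur_width > 0:                                  # DIV2K-blur
        img = cv2.GaussianBlur(img, ksize=(21, 21), sigmaX=blur_width)
    h, w = img.shape[:2]
    lr = cv2.resize(img, (w // scale, h // scale),
                    interpolation=cv2.INTER_CUBIC)      # bicubic downsampling
    if noise_sigma > 0:                                 # DIV2K-noise
        lr += np.random.normal(0, noise_sigma, lr.shape)
    return np.clip(lr, 0, 255).astype(np.uint8)

# hr: a clean HR image loaded as an RGB uint8 array (assumed given)
lr_clean = synthesize_lr(hr)                        # (a) DIV2K-clean
lr_noise20 = synthesize_lr(hr, noise_sigma=20)      # (b) DIV2K-noise20
lr_blur4 = synthesize_lr(hr, blur_width=4)          # (c) DIV2K-blur4
```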
Datasets (a), (b), (c) and (d) have the same image contents but different degradations. However, we find that the deep degradation representations (DDR) obtained by SR networks are discriminative to these degradation types, even if the network has not seen these degradations at all during training. Further, for real-world degradations like those in (e), the DDR is still able to discern them. | 1. What is the main contribution of the paper regarding semantic representation in the SR network?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application to various low-level vision tasks?
3. Do you have any concerns about the experiments conducted in the paper, such as the choice of degradations and the use of diverse upsampling factors?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper analyzes the semantics learned in SR networks, and finds that a well-trained SR network can naturally extract degradation descriptors. The authors find that conventional or shallow CNN models have difficulty in separating degradation-related semantics. In particular, global residual and adversarial learning make the SR network extract degradation-related representations. The authors then exploit their findings for other vision tasks.
Strengths And Weaknesses
Strengths
In this paper, interesting findings are introduced such as degradation-related semantics.
The proposed findings can be adopted for various low-level vision tasks.
It is interesting that we need to diminish the feature discriminability to improve the model generalization for various degradations.
Weaknesses
Although the observation and finding of this paper are interesting, the finding of this paper should be verified by real applications. Although applied to distortion identification and blind SR, it is somewhat insufficient. Comparative analysis with existing blind SR techniques such as IKC is required, and experiments on other degradations such as compression rather than noise and blur are also required.
I think that this paper mainly focuses on SR. However, in the motivation section, the experiments for the observations are biased toward image denoising (or restoration). Also, the traditional method compared against is BM3D, which is a representative denoising method. When the reader first sees the paper, this seems strange. Figure 2 should be an experiment on SR.
Figure 2 does not show observations for various degradations. Not only noise, but also degradations such as compression, irregular holes, and color tones should be dealt with as well.
Figures 5~8 seem to be convincing only if more diverse degradations are considered. In addition, it is necessary to verify whether there is a different phenomenon depending on the upsampling factor of the SR network.
Minor points
[1pp] sparse coding(Yang ) → sparse coding (Yang )
[3pp] Wang et al. Wang et al. (2020b) → Wang et al. (Wang et al. (2020b))
In Figure 2, are the results in the last column from SRCNN? The text above the figure denotes them as SRCNN, but the caption says that CinCGAN (Yuan et al., 2018) is trained on the DIV2K-mild dataset in an unpaired manner.
Clarity, Quality, Novelty And Reproducibility
This paper is quite clear and has a novel point.
Detailed implementation information is somewhat lacking, but overall it is judged that the work can be reproduced.
It has not been verified whether the performance of the real SOTA blind SR model can be improved or whether there are other practical applications. Experiments on distortion identification and blind SR presented in the paper only show that it is applicable to them. It should be demonstrated experimentally whether it can effectively improve the existing distortion identification and blind SR methods. |
ICLR | Title
Marginal Deep Architectures: Deep learning for Small and Middle Scale Applications
Abstract
In recent years, many deep architectures have been proposed in different fields. However, to obtain good results, most of the previous deep models need a large amount of training data. In this paper, for small and middle scale applications, we propose a novel deep learning framework based on stacked feature learning models. Particularly, we stack marginal Fisher analysis (MFA) layer by layer for the initialization of the deep architecture and call it “Marginal Deep Architectures” (MDA). In the implementation of MDA, the weight matrices of MFA are first learned layer by layer, and then we exploit some deep learning techniques, such as back propagation, dropout and denoising, to fine-tune the network. To evaluate the effectiveness of MDA, we have compared it with some feature learning methods and deep learning models on 7 small and middle scale real-world applications, including handwritten digit recognition, speech recognition, historical document understanding, image classification, action recognition and so on. Extensive experiments demonstrate that MDA performs better than not only shallow feature learning models, but also state-of-the-art deep learning models in these applications.
1 INTRODUCTION
Deep learning methods have achieved desirable performance in many domains, such as image classification and detection, document analysis and recognition, natural language processing, and video analysis (Krizhevsky et al., 2012; Chan et al., 2014; Ciresan et al., 2010; Collobert & Weston, 2008; Le et al., 2011). Deep learning methods learn the data representation by using multiple processing layers, which discover the intricate structure of high dimensional data with multiple levels of abstraction (LeCun et al., 2015). For example, for face recognition, the features learned by the first layer may be edges, directions and some local information. The second layer typically detects some object parts, which are combinations of edges and directions. The higher layers may further abstract the face image by combining the features of the previous layers (outline of the eyes, nose, lips). This procedure is very similar to the human visual and perceptual system.
In recent years, many deep learning methods have been proposed (l. Boureau & others, 2008; Lee et al., 2009b;a; Hinton & Salakhutdinov, 2006). However, most models face some difficult problems, such as parameters that need to be randomly initialized, like the weight matrix of two successive layers in deep belief networks (DBNs) and the convolution kernels in convolutional neural networks (CNNs). In addition, traditional deep learning methods need large scale training data to train their complex networks, which causes many problems in the training process. If we do not initialize the parameters properly, the optimization procedure might need a long training time and fall into local minima. Alternatively, many feature learning models have been proposed to learn the intrinsic structures of high-dimensional data and avoid the curse of dimensionality. In particular, most of them can be trained with small and middle scale data, and their learning algorithms are generally based on closed-form solutions or convex optimization. For instance, marginal Fisher analysis (MFA) (Yan et al., 2007; Zhong et al., 2013) is a supervised feature learning model based on the graph embedding framework. It utilizes an intrinsic graph to characterize the intraclass compactness, and another penalty graph to characterize the interclass separability. Its optimal solution can be learned by generalized eigenvalue decomposition. However, on the one hand, shallow feature learning models cannot work well on data with highly nonlinear structure; on the other hand, few efforts have been made to combine shallow feature learning models for the design of deep architectures.
In order to simultaneously solve the existing problems of deep learning methods and combine the advantages of feature learning models, we propose a novel deep learning method based on stacked feature learning models. Particularly, we stack marginal Fisher analysis (MFA) layer by layer for the initialization of the deep architecture and call it “Marginal Deep Architectures” (MDA). Firstly, the input data are mapped to a higher dimensional space using a random weight matrix. Then we use MFA to learn lower dimensional representations layer by layer. In the implementation of this architecture, we add some tricks to the training process, such as back propagation, dropout and denoising, to fine-tune the network. Finally, a softmax layer is connected to the last feature layer. We have compared our MDA with some feature learning methods and deep learning models on datasets from different domains (including handwritten digit recognition, speech recognition, historical document understanding, image classification, action recognition and so on). Extensive experiments demonstrate that MDA performs better than not only shallow feature learning models, but also state-of-the-art deep learning models in small and middle scale applications.
The contributions of this work are highlighted as follows.
1. We propose a novel structure to build a deep architecture. The first hidden layer has twice or four times as many neurons as the input layer. Then we can use feature learning models layer by layer to learn compact representations of the data. Finally, we set the last layer as a softmax classifier.
2. Traditional deep learning models in general need large scale training data. In contrast, MDA works better in small and middle scale applications, because the initialization of the weight matrices using MFA is much better than random initialization.
3. Our MDA can work well on datasets from different domains, such as handwritten digits, spoken letters and natural images. Extensive experiments demonstrate that MDA is a general model to handle small and middle scale data. On the other hand, for large scale datasets, like CIFAR-10, MDA works comparably with other deep learning methods.
The rest of this paper is organized as follows: In Section 2, we give a brief overview of related work. In Section 3, we present the marginal Fisher analysis (MFA) and the proposed marginal deep architectures (MDA) in detail. The experimental settings and results are reported in Section 4, while Section 5 concludes this paper with remarks and future work.
2 RELATED WORK
With the development of deep learning methods, many deep networks have been proposed in recent years (Donahue et al., 2013; Krizhevsky et al., 2012; Long et al., 2015; Zhou et al., 2014). These deep learning models have shown powerful performance in various fields, such as image classification and analysis, document analysis and recognition, and natural language processing. In the area of image analysis, Krizhevsky et al. proposed a large, deep convolutional neural network (AlexNet) to classify the 1.2 million high-resolution images in ImageNet, using efficient GPU implementations to speed up training. The results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging dataset using purely supervised learning (Krizhevsky et al., 2012). In order to popularize the deep convolutional neural network, Donahue et al. proposed DeCAF (Deep Convolutional Activation Feature), which is trained in a fully supervised fashion on a large, fixed set of object recognition tasks (Donahue et al., 2013). DeCAF provides a uniform framework which researchers can improve and adapt for specific tasks. However, its performance on scene recognition has not attained the same level of success. In order to handle this problem, Zhou et al. introduced a new scene-centric database called Places with over 7 million labeled pictures of scenes. They learn deep features for scene recognition tasks by using the same architecture as for ImageNet, and establish new state-of-the-art results on several scene-centric datasets (Zhou et al., 2014). However, these methods based on convolutional operations need very large scale training samples and a long training time. They cannot work well in small and middle scale applications.
In other domains, deep learning methods also achieve good performance. Hinton et al. represent the shared views of four research groups that have had recent successes in using DNNs for automatic speech recognition (ASR). DNNs that contain many layers of nonlinear hidden units and a very large output layer can outperform Gaussian mixture models (GMMs) at acoustic modeling for speech recognition on a variety of data sets (Hinton et al., 2012a). In the area of genetics, Xiong et al. use “deep learning” computer algorithms to derive a computational model that takes DNA sequences as input and applies general rules to predict splicing in human tissues (Xiong et al., 2015). It reveals the genetic origins of disease and how strongly genetic variants affect RNA splicing. In the area of natural language understanding, deep learning models have delivered strong results on topic classification, sentiment analysis, and so on. Sutskever et al. proposed a general approach, the Long Short-Term Memory (LSTM) architecture, which can solve general sequence-to-sequence problems better than before (Sutskever et al., 2014). In addition, Hinton et al. proposed autoencoder (AE) networks as an effective way to learn low-dimensional codes of high-dimensional data. Based on the autoencoder, there are also many excellent works handling various tasks. Vincent et al. proposed the denoising autoencoder (DAE), which makes the learned representations robust to partial corruption of the input data (Vincent et al., 2008). The denoising autoencoder, which initializes deep architectures layer by layer, is very similar to the human visual system. Hinton et al. introduced random ‘dropout’ to prevent overfitting, which improves many benchmark tasks and obtains new records for speech and object recognition (Hinton et al., 2012b). Then, Vincent et al. proposed stacked denoising autoencoders (SDAE), which are based on stacking layers of denoising autoencoders (Vincent et al., 2010). They are very useful for learning higher level representations and work well on natural images and handwritten digits. However, for the same reason, they also need a large scale training set and a long training time, and thus have no advantage in handling small and middle scale applications.
Moreover, in the field of feature learning, dimensionality reduction plays a crucial role in compressing and visualizing high-dimensional data and avoiding the “curse of dimensionality” (van der Maaten et al., 2009; van der Maaten, 2007). Traditional dimensionality reduction methods can mainly be classified along three axes: linear or nonlinear, e.g., principal component analysis (PCA) (Jolliffe, 2002) and locality preserving projections (LPP) (Niyogi, 2004) are linear methods, while stochastic neighbor embedding (SNE) (Hinton & Roweis, 2002) is a nonlinear method; supervised or unsupervised, e.g., marginal Fisher analysis (MFA) (Yan et al., 2007; Zhong et al., 2013) and linear discriminant analysis (LDA) (Fisher, 1936) are supervised methods, while PCA is an unsupervised method; local or global, e.g., MFA and SNE are local methods, while PCA is a global method. Many feature learning models based on geometric theory provide different solutions to the problem of dimensionality reduction. Yan et al. proposed a general graph embedding framework from which new dimensionality reduction algorithms can be derived (Yan et al., 2007). If we only directly apply shallow feature learning models to extract representations from the original data, we often cannot obtain a good outcome. Considering this situation, we choose an excellent feature learning model and combine it with some deep learning algorithms. MFA is one particular formulation of the graph embedding framework. It utilizes an intrinsic graph to characterize the intraclass compactness, and another penalty graph to characterize the interclass separability. Our motivation is to combine the advantages of MFA and deep architectures and propose a new initialization method for deep learning algorithms.
There are also some excellent works that combine feature learning models with deep architectures (Yuan et al.; George et al., 2014; Ngiam et al., 2011). Yuan et al. proposed an improved multilayer learning model to solve the scene recognition task (Yuan et al.), which overcomes the limitation of shallow, one-layer representations for scene recognition. Trigeorgis et al. proposed deep Semi-NMF, which is able to learn such hidden representations from different, unknown attributes of a given dataset (George et al., 2014). Ngiam et al. proposed a deep architecture to learn features over multiple modalities (Ngiam et al., 2011). They showed that multi-modality feature learning is better than single-modality learning and achieved good performance on video and audio datasets. However, in general, we can only obtain data from one modality. In this work, we combine the advantages of MFA and deep architectures based on stacked feature learning models (Zheng et al., 2014; 2015), and we use some deep learning tricks, like back propagation, denoising and dropout, to fine-tune the network. The advantage of this deep architecture is that we can learn desirable weight matrices even if the training data is not large. Compared with traditional deep learning models and shallow feature learning models, our MDA achieves state-of-the-art results in most cases.
3 MARGINAL DEEP ARCHITECTURES (MDA)
In this section, we first introduce a novel framework of deep architectures, then we introduce marginal Fisher analysis (MFA) and the proposed marginal deep architectures (MDA) in detail. In addition, we also present some deep learning tricks that we use in the MDA model, including back propagation, denoising and dropout.
3.1 A NOVEL FRAMEWORK OF DEEP ARCHITECTURES
The feature learning problem is generally formulated as follows. Given n data points $\{x_1^T, \ldots, x_n^T\} \subset \mathbb{R}^D$, where D is the dimensionality of the data space, we seek compact representations of these data, i.e., $\{y_1^T, \ldots, y_n^T\} \subset \mathbb{R}^d$, where d is the dimensionality of the low dimensional embeddings. In order to improve the accuracy of shallow feature learning models, we use stacked feature learning models to construct deep architectures (Zheng et al., 2014; 2015), which is a general framework for different applications. In this case, the mapping of data from the original D-dimensional space to the resulting d-dimensional space can be described as

D \Rightarrow D_1 \Rightarrow \cdots \Rightarrow D_i \Rightarrow \cdots \Rightarrow D_{p-1} \Rightarrow d, \quad (1)

where $D_1$ is the dimensionality of the first, higher dimensional space (its number of nodes is twice or four times that of the input layer), $D_i$ represents the dimensionality of the i-th intermediate representation space, and p is the total number of mapping steps. Here, we can use different feature learning models for the learning of each layer. As the feature learning models are optimized layer by layer, we can obtain the mapping functions between successive layers. The first hidden layer is initialized by a random matrix $W_{r1}$, and its representation is

a^1 = g(W_{r1}^T x + b), \quad (2)

where $g(\cdot)$ is a non-linear activation or transfer function. Then, we can use feature learning models to initialize the next layers. The representations of the next hidden layers are

a^k = g(W_{F_{k-1}}^T a^{k-1} + b), \quad (3)

where $W_{F_{k-1}}$ is the weight matrix of the (k−1)-th layer learned by the feature learning models.
3.2 MARGINAL FISHER ANALYSIS (MFA)
Based on our novel framework of deep architectures, we introduce Marginal Fisher Analysis (MFA) to build MDA. Here, many traditional feature learning models, such as linear discriminant analysis (LDA), could be used as building blocks of MDA. Take LDA as an example. It assumes that the data of each class follow a Gaussian distribution. However, this assumption is often not satisfied in the real world, and without it LDA cannot separate data with nonlinear structure well. Alternatively, MFA solves this problem effectively. Hence, considering the learning capability, we choose MFA as the building block of MDA in our work. MFA uses the graph embedding framework to set up an intrinsic graph that characterizes the intraclass compactness and another penalty graph that characterizes the interclass separability. The marginal Fisher criterion is defined as
W^* = \arg\min_{W} \frac{\mathrm{tr}(W^T X (D - A) X^T W)}{\mathrm{tr}(W^T X (D^p - A^p) X^T W)}, \quad (4)

where A and $A^p$ are the weight matrices of the intrinsic and penalty graphs, respectively, and D and $D^p$ are diagonal matrices with elements $D_{ii} = \sum_j A_{ij}$ and $D^p_{ii} = \sum_j A^p_{ij}$, respectively. Then we obtain the final projection matrix by multiplying PCA's projection with the marginal Fisher projection,

W_{MFA} = W_{PCA} W^*. \quad (5)
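For illustration, a common ratio-trace relaxation of Eq. 4 can be solved by a generalized eigendecomposition; the sketch below assumes the intrinsic and penalty graph weight matrices A and A^p have already been built from the 5 intraclass and 20 interclass nearest neighbors, and adds a small ridge term for numerical stability:

```python
import numpy as np
from scipy.linalg import eigh

def mfa_projection(X, A, Ap, d):
    """X: (D, n) PCA-reduced data; A, Ap: intrinsic / penalty graph weights."""
    S = X @ (np.diag(A.sum(1)) - A) @ X.T       # X (D - A) X^T, intrinsic part
    Sp = X @ (np.diag(Ap.sum(1)) - Ap) @ X.T    # X (D^p - A^p) X^T, penalty part
    # generalized eigenproblem S w = lambda Sp w; keep the smallest eigenvalues
    vals, vecs = eigh(S, Sp + 1e-8 * np.eye(Sp.shape[0]))
    return vecs[:, :d]                          # columns span the projection W*
```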
3.3 MARGINAL DEEP ARCHITECTURES (MDA)
In order to combine the advantages of MFA and the proposed deep architectures, we propose the marginal deep architectures (or MDA). The MDA, inherited from the proposed novel framework of deep architectures, is shown in Fig. 1. Given an input vector $x \in [0, 1]^d$, we first map it to a higher dimensional space by a random weight matrix $W_{r1}$. The representation of the first hidden layer is computed as

a^1 = s(W_{r1}^T x + b), \quad (6)

where $s(\cdot)$ is the sigmoid function $s(x) = \frac{1}{1 + e^{-x}}$, b is the bias term, and $a^1$ is the output of the first layer. From the second layer to the (n−1)-th layer, we use the weight matrices learned from MFA to map layer by layer:

a^k = s(W_{MFA_{k-1}}^T a^{k-1} + b). \quad (7)
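Putting Eqs. 6-7 together, the layer-wise initialization of MDA can be sketched as follows; biases are omitted, and `mfa_fit` is a hypothetical wrapper around the MFA solver above (PCA followed by Eq. 5):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def build_mda(X, y, layer_dims, rng=np.random.default_rng(0)):
    """X: (n, D) data, y: labels; layer_dims e.g. [256, 512, 256, 128, 64, 32]."""
    weights = [rng.normal(0.0, 0.01, (layer_dims[0], layer_dims[1]))]  # random W_r1
    H = sigmoid(X @ weights[0])                      # Eq. 6 (bias omitted)
    for d_out in layer_dims[2:]:
        W = mfa_fit(H, y, d_out)    # hypothetical: MFA projection on current layer
        weights.append(W)
        H = sigmoid(H @ W)                           # Eq. 7
    return weights       # initialization, to be fine-tuned with back propagation
```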
The last layer is a softmax regression layer whose number of neurons equals the number of categories. The cost function is defined as

J(w) = -\frac{1}{N} \left( \sum_{i=1}^{N} \sum_{j=1}^{K} I(y_i = j) \log \frac{\exp(w_j^T a_i^{n-1})}{\sum_{l=1}^{K} \exp(w_l^T a_i^{n-1})} \right), \quad (8)
where $I(x)$ is the indicator function: $I(x) = 1$ if x is true and $I(x) = 0$ otherwise; $y_i$ is the label corresponding to $x_i$. Then the probability that $x_i$ is classified to class j is
p(y_i = j \mid x_i, w) = \frac{\exp(w_j^T a_i^{n-1})}{\sum_{l=1}^{K} \exp(w_l^T a_i^{n-1})}. \quad (9)
Taking derivatives, one can show that the gradient is,
\nabla J(w) = -\frac{1}{N} \sum_{i=1}^{N} \left[ x_i \left( I(y_i = j) - p(y_i = j \mid x_i, w) \right) \right]. \quad (10)
If the (n−1)-th layer has more neurons than the last layer, we can continue using MFA for the mapping. On the contrary, if the (n−1)-th layer has fewer neurons than the last layer, we can randomly initialize the weight matrix between these two layers. Next, in order to improve MDA, we introduce back propagation, denoising and dropout operations.
3.4 BACK PROPAGATION
In order to adjust the network, we use back propagation (Rumelhart et al., 1986) to compute partial derivatives and stochastic gradient descent to update the weight matrices and the bias terms. For each node i in the output layer (the n-th layer), we compute an error term as
\delta_i^n = \nabla J(w), \quad (11)
where J(w) is the cost function computed from Eq. 8 and $\nabla J(w)$ is computed from Eq. 10. For each node i from the (n−1)-th layer down to the second layer, the error term is computed as
\delta_i^k = \Big( \sum_{j} w_{ji}^k \, \delta_j^{k+1} \Big) s'(z_i^k), \quad (12)

where the sum runs over all nodes j in the (k+1)-th layer.
The back propagation procedure relies on computing the gradient of an objective function with respect to the weights of a multilayer stack of modules, starting from the output at the top and ending at the input at the bottom.
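A minimal NumPy rendering of Eqs. 11-12 is given below; biases are omitted and `acts[k]` denotes the sigmoid outputs of layer k, with `acts[0]` the input:

```python
import numpy as np

def s_prime(a):
    return a * (1.0 - a)    # sigmoid derivative expressed via the activation

def backprop(acts, weights, delta_out):
    """weights[k] maps layer k to layer k+1; delta_out is the Eq. 11 error term."""
    L = len(weights)
    deltas = {L: delta_out}
    for k in range(L - 1, 0, -1):                                  # Eq. 12
        deltas[k] = (deltas[k + 1] @ weights[k].T) * s_prime(acts[k])
    # gradients for stochastic gradient descent: W[k] -= lr * grads[k]
    return {k: acts[k].T @ deltas[k + 1] for k in range(L)}
```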
3.5 DENOISING OPERATION
Vincent et al. proposed the denoising autoencoder to improve the robustness of the autoencoder (Vincent et al., 2008). It is very similar to regularization methods and helps avoid the “overfitting” problem. The basic idea is to corrupt part of the input data at a desired proportion ν of “destruction”: for each input x, a fixed number νd of components are chosen at random, and their values are forced to 0, while the others are left untouched. The initial input x is thus turned into a partially destroyed version x̃ by means of a stochastic mapping

\tilde{x} \sim q_D(\tilde{x} \mid x), \quad (13)

where $q_D(\tilde{x} \mid x)$ is the stochastic corruption distribution. Then, a hidden representation h is computed as

h = s(W^T \tilde{x} + b). \quad (14)
In our MDA, we use this idea to improve the network; please refer to Fig. 1 for a clear illustration. For the input layer, the output of the first hidden layer is represented as

a^1 = s(W_{r1}^T \tilde{x} + b_1), \quad (15)
where $W_{r1}$ is the random weight matrix of the first layer and $b_1$ is the bias term of the first layer. The “denoising” operation is based on an additional criterion: robustness to partial destruction of the input, which means that a good intermediate representation can be learned from corrupted versions of the observed input. This operation helps the network learn a more stable structure and avoids the overfitting problem in most cases.
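A sketch of the corruption step is shown below; masking each component independently with probability ν is a slight simplification of picking exactly νd components, and `x_batch`, `W_r1`, `b1` and `sigmoid` are assumed from the sketches above:

```python
import numpy as np

def corrupt(x, nu, rng=np.random.default_rng(0)):
    """Force roughly a fraction nu of the input components to zero (Eq. 13)."""
    mask = rng.random(x.shape) >= nu     # keep each component with prob 1 - nu
    return x * mask

x_tilde = corrupt(x_batch, nu=0.1)       # denoising rate used in our experiments
a1 = sigmoid(x_tilde @ W_r1 + b1)        # Eq. 15: first hidden layer on corrupted input
```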
3.6 DROPOUT
For the same reason as the denoising operation, dropout is a trick to prevent overfitting (Hinton et al., 2012b). When a large feedforward neural network is trained on a small training set, dropout helps it perform well on the test set. In order to prevent complex co-adaptations on the training data, the basic idea of dropout is that each hidden node is randomly omitted from the network with a probability of β, so that a hidden node cannot rely on other hidden nodes being present. From another point of view, dropout is a very efficient way of performing model averaging with neural networks: it is equivalent to training a very large number of different networks that share parameters and averaging their predictions at test time, while saving training time. Fig. 1 shows the dropout operation in our MDA.
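For completeness, a sketch of the dropout mask is given below; the rescaling at training time (“inverted” dropout) is a common implementation choice and an assumption here, equivalent in expectation to averaging at test time:

```python
import numpy as np

def dropout(h, beta, rng=np.random.default_rng(0)):
    """Randomly omit hidden nodes with probability beta during training."""
    mask = (rng.random(h.shape) >= beta) / (1.0 - beta)   # rescale kept nodes
    return h * mask          # at test time the layer is used unchanged
```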
4 EXPERIMENTS
4.1 DATESET DESCRIPTIONS
We evaluate the performance of MDA on five benchmark data sets. The details of the data sets are shown in Tab. 1. The USPS 1 data set is a handwritten digit image data set that includes 7291 training samples and 2007 test samples from 10 classes with 256-dimensional features. The task is to recognize the digits 0 to 9. The Isolet 2 data set is a collection of audio feature vectors of spoken letters from the English alphabet. It includes 6238 training samples and 1559 test samples from 26 classes with 617-dimensional features. The task is to identify which letter is spoken based on the recorded
1http://www.gaussianprocess.org/gpml/data/ 2http://archive.ics.uci.edu/ml/datasets/ISOLET
(and pre-processed) audio signal. Sensor 3 is a sensorless drive diagnosis data set that includes 46816 training samples and 11693 test samples from 11 classes with 48-dimensional features. The features are extracted from electric current drive signals. The task is to classify 11 different classes corresponding to different conditions of the drive, which has intact and defective components. Covertype 4 contains geological and map-based data from four wilderness areas located in the Roosevelt National Forest of northern Colorado. It includes 15120 training samples and 565892 test samples from 7 classes with 54-dimensional features. The task is to identify the forest cover type from cartographic variables. For the IbnSina 5 ancient Arabic document data set, we use 50 pages of the manuscript for training (17543 training samples) and 10 pages for testing (3125 test samples). The data samples belong to 174 classes of subwords and are of dimensionality 200.
In addition, we also use the large scale dataset CIFAR-10 6 to test our MDA on large scale applications. The CIFAR-10 dataset consists of 60000 32 × 32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. We also test our MDA on a specific task using the CMU motion capture (CMU mocap) data set 7. The CMU mocap data set includes three categories, namely, jumping, running and walking. We choose 49 video sequences from four subjects. For each sequence, the features are generated using Lawrence's method 8, with dimensionality 93 (Zhong et al., 2010). Because of the small number of samples in CMU, we adopt 10-fold cross-validation in our experiments and use the average error rate and standard deviation to evaluate the performance.
4.2 CLASSIFICATION ON FIVE BENCHMARK DATA SETS
4.2.1 BASELINE METHODS
In order to evaluate the performance of MDA, we compared it with 5 deep learning models, including the autoencoder (AE) (Hinton & Salakhutdinov, 2006), stacked autoencoders, denoising autoencoders (Vincent et al., 2008), stacked denoising autoencoders (Vincent et al., 2010) and stacked denoising autoencoders with dropout; 2 feature learning models, MFA (Zhong et al., 2013; Yan et al., 2007) and PCA (Jolliffe, 2002); the PCA-based deep architecture (PDA) built on our uniform framework; and the classification accuracy in the original feature space.
4.2.2 EXPERIMENTAL SETTINGS
All of the deep learning methods have the same settings. The size of the minibatch was set to 100, the learning rate and momentum were set to the default values 1 and 0.5, the number of epochs was set to 400, and the dropout rate and denoising rate ν were set to 0.1. For the AE and SAE, the weight penalty of the L2 norm was set to 10−4. For MFA, the number of nearest neighbors for constructing the intrinsic graph was set to 5, while that for constructing the penalty graph was set to 20. The target spaces of MFA and PCA on the different data sets are shown in Tab. 1. For the USPS data set, the architecture was set to 256 − 512 − 256 − 128 − 64 − 32. For the Isolet data set, the architecture was set to 617 − 1234 − 617 − 308. For the Sensor data set, the architecture was set to 48 − 96 − 48 − 24.
3http://archive.ics.uci.edu/ml/datasets/Dataset+for+Sensorless+Drive+Diagnosis# 4http://archive.ics.uci.edu/ml/datasets/Covertype 5http://www.causality.inf.ethz.ch/al data/IBN SINA.html 6http://www.cs.toronto.edu/ kriz/cifar.html 7http://http://mocap.cs.cmu.edu/ 8http://is6.cs.man.ac.uk/∼neill/mocap/
For the Covertype data set, we set the architecture to 54 − 216 − 108 − 54 − 27. Finally, for the IbnSina data set, the architecture was set to 200 − 400 − 200 − 100.
4.2.3 CLASSIFICATION RESULTS
The experimental results are shown in Tab. 2. We can see that our MDA achieves the best results on four of the five data sets; on the Sensor data set it achieves the second best result, just below PDA. PDA achieves the best result on the Sensor data set and the second best results on the other data sets. These results demonstrate that our uniform deep architectures achieve good performance in most cases. In addition, MDA outperforms not only the traditional deep learning models, but also the shallow feature learning models. This shows that our deep architectures based on stacked feature learning models can learn better features than shallow feature learning models.
4.3 EVALUATION
4.3.1 DIFFERENT STRUCTURES FOR MDA
In order to evaluate the desired structures of MDA, we changed the number of nodes of the second layer. For the USPS data set, we first removed the second layer, so the architecture was 256−128−64−32. Then, we set the number of nodes of the second layer to twice that of the input layer, giving the architecture 256−512−256−128−64−32. Next, with four times as many nodes as the input layer, the architecture was 256−1024−512−256−128−64−32. Finally, with eight times as many nodes as the input layer, the architecture was 256−2048−1024−512−256−128−64−32. The structures for the other data sets are shown in Tab. 3.
The experimental results are shown in Tab. 4. When the number of nodes of the second layer is twice that of the input layer, MDA achieves the minimum classification error on all data sets except the Covertype data set. When the number of nodes of the second layer is four times that of the input layer, MDA gets the worst result on the Covertype data set. We can conclude that MDA works well when the number of nodes of the second layer is twice or four times that of the input layer.
4.3.2 DIFFERENT NUMBER OF HIDDEN LAYERS FOR MDA
In order to evaluate how many hidden layers are suitable for different datasets, we designed experiments with different numbers of hidden layers. We used 1 ∼ 7 hidden layers on the USPS and Isolet datasets and 1 ∼ 5 hidden layers on the Covertype, Sensor and Ibnsina datasets. The experimental settings were the same as in the previous experiments.
Tab. 5 shows the classification error on the 5 datasets with different numbers of hidden layers. All the datasets achieved their best results with 3 hidden layers, except the USPS dataset, which achieved its best result with 5 hidden layers. From 1 to 3 hidden layers, the classification error decreases on all datasets as the number of layers increases. For small and middle scale applications, we do not need very deep architectures; for large scale applications, we can design deeper architectures to achieve better performance.
4.4 CLASSIFICATION ON LARGE SCALE DATASET CIFAR-10
The previous section introduced the advantages of MDA in small and middle scale applications. In order to evaluate the universality of MDA, we chose the relatively large scale dataset CIFAR-10 to test its performance.
In our experiments, we first transformed the color images to gray images in order to reduce the dimensionality of the input. Then we treated each sample as a 1024-dimensional vector, which is the input of our MDA. We call this data set gray-CIFAR10. The architecture was set to 1024−2048−1024−512−256−128−64, the minibatch size was set to 100, the dropout ratio and denoising ratio were set to 0.1, the number of epochs was set to 400, the learning rate was set to 1, and the momentum was set to 0.5. We compared our MDA with the previous 6 methods.
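The preprocessing could look as follows; the paper does not specify the color-to-gray conversion, so the standard luminance weights below are an assumption:

```python
import numpy as np

def to_gray_vectors(images):
    """images: (n, 32, 32, 3) uint8 CIFAR-10 batch -> (n, 1024) input vectors."""
    gray = images @ np.array([0.299, 0.587, 0.114])   # assumed luminance weights
    return gray.reshape(len(images), -1) / 255.0      # flatten and scale to [0, 1]
```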
Tab. 6(a) shows the classification error on gray-CIFAR10. We can see that PDA and MDA achieved the best results among these 7 methods. However, none of the methods performed well in this setting, because of the grayscale conversion.
4.5 CLASSIFICATION ON CMU MOCAP DATA SET
The CMU mocap data set is a very small dataset that has only 49 samples. Traditional deep learning methods do not work well in this kind of application. We tested our MDA and PDA and compared them with 5 other deep learning models. The architectures for all deep models (except PDA) were set to 93 − 186 − 93 − 47 − 24. Specially, since the CMU mocap data set has only 49 samples, the PCA method can only reduce the dimensionality to 49 at most, so the architecture of PDA was set to 93 − 186 − 24. The denoising ratio and dropout ratio were set to 0.1 for DAE, DAE with dropout, SDAE, SAE, PDA and MDA. The weight penalty for AE was set to 10−4. The learning rate was set to 0.01, the momentum was set to 0.5 and the number of epochs was set to 600. The experiment used 10-fold cross-validation. The experimental results are shown in Tab. 6(b).
In Tab. 6(b), our PDA and MDA achieved the best results on this dataset and have lower standard deviations than the other deep learning models. This demonstrates that our PDA and MDA are more stable than other deep learning models. The traditional autoencoder, SDAE and DAE with dropout achieved the same result on this dataset, better than SAE and DAE.
5 CONCLUSION
In this paper, we proposed a novel deep learning framework based on stacking feature learning models to handle small and middle scale data sets. We then introduced MFA into this framework, yielding MDA. Deep learning tricks like back propagation, denoising and dropout operations are applied to MDA to improve its performance. Extensive experiments on 7 data sets of different types demonstrate that MDA performs better than not only shallow feature learning models, but also state-of-the-art deep learning models in small and middle scale applications. The evaluation of MDA shows how to adjust the parameters so that MDA works well. For future work, we plan to try other feature learning models and explore different structures for this novel deep learning model. In addition, we plan to explore new deep architectures based on this framework to handle large scale datasets. | 1. What is the main contribution of the proposed approach in the paper?
2. What are the concerns regarding the initialization strategy used in the approach?
3. How does the reviewer assess the clarity and motivation of the paper's description of the approach?
4. What are the issues with the experimental comparisons in the paper, according to the reviewer?
5. How can the authors improve the paper to make a more convincing case for their proposed approach? | Review | Review
The proposed approach consists in a greedy layer wise initialization strategy for a deep MLP model, which is followed by global gradient-descent with dropout for fine-tuning. The initialization strategy uses a first randomly initialized sigmoid layer for dimensionality expansion followed by 2 sigmoid layers whose weights are initialized by Marginal Fisher Analysis (MFA) which learns a linear dimensionality reduction based on a neighborhood graph constructed using class label information (i.e. supervised dimensionality reduction). Output layer is a standard softmax layer.
The approach is thus to be added to a growing list of heuristic layer-wise initialization schemes.
The particular choice of initialization strategy, while reasonable, is not sufficiently well motivated in the paper relative to alternatives, and thus feels rather arbitrary.
The paper lacks clarity in the description of the approach: MFA is poorly explained with undefined notations (in Eq. 4, what is A? It has not been properly defined); the precise use of alluded denoising in the model is also unclear (is there really training of an additional denoting objective, or just input corruption?).
The question of the (arguably mild) inconsistency of applying a linear dimensionality reduction algorithm, that is trained without any sigmoid, and then passing its learned representation through a sigmoid is not even raised. This, in addition to the fact that sigmoid hidden layers are no longer commonly used (why did you not also consider using RELUs?).
More importantly I suspect methodological problems with the experimental comparisons: the paper mentions using *default* values for learning-rate and momentum, and having (arbitrarily?) fixed epoch to 400 (no early stopping?) and L2 regularization to 1e-4 for some models.
*All* hyper parameters should always be properly hyper-optimized using a validation set (or cross-validation) including early-stopping, and this separately for each model under comparison (ideally also including layer sizes). This is all the more important since you are considering smallish datasets, so that the various initialization strategies act mainly as different indirect regularization schemes: they thus need to be carefully tuned. This casts serious doubts as to the amount of hyper-parameter tuning (close to none?) that went into training the alternative models used for comparison.
The Marginal Fisher Analysis dimensionality reduction initialization strategy may well offer advantages, but as it currently stands this paper doesn’t yet make a sufficiently convincing case for it, nor provide useful insights into the nature of the expected advantages.
I would also suggest, for image inputs such as CIFAR10, to use the qualitative tool of showing the filters (back projected to input space) learned by the different initialization schemes under consideration, as this could help visually gain insight as to what sets methods apart. |
ICLR | Title
Marginal Deep Architectures: Deep learning for Small and Middle Scale Applications
Abstract
In recent years, many deep architectures have been proposed in different fields. However, to obtain good results, most of these deep models need a large amount of training data. In this paper, for small and middle scale applications, we propose a novel deep learning framework based on stacked feature learning models. In particular, we stack marginal Fisher analysis (MFA) layer by layer to initialize the deep architecture, and call the result “Marginal Deep Architectures” (MDA). In the implementation of MDA, the weight matrices of MFA are first learned layer by layer, and then we exploit deep learning techniques, such as back propagation, dropout and denoising, to fine-tune the network. To evaluate the effectiveness of MDA, we have compared it with several feature learning methods and deep learning models on 7 small and middle scale real-world applications, including handwritten digit recognition, speech recognition, historical document understanding, image classification and action recognition. Extensive experiments demonstrate that MDA performs better than not only shallow feature learning models but also state-of-the-art deep learning models in these applications.
1 INTRODUCTION
Deep learning methods have achieved desirable performance in many domains, such as image classification and detection, document analysis and recognition, natural language processing and video analysis (Krizhevsky et al., 2012; Chan et al., 2014; Ciresan et al., 2010; Collobert & Weston, 2008; Le et al., 2011). Deep learning methods learn the data representation by using multiple processing layers, which discover the intricate structure of high dimensional data with multiple levels of abstraction (LeCun et al., 2015). For example, in face recognition, the features learned by the first layer may be edges, directions and some local information. The second layer typically detects object parts, which are combinations of edges and directions. The higher layers may further abstract the face image by combining the features of previous layers (outline of the eyes, nose, lips). This procedure is very similar to the human visual and perceptual system.
In recent years, many deep learning methods have been proposed (l. Boureau & others, 2008; Lee et al., 2009b;a; Hinton & Salakhutdinov, 2006). However, most models face difficult problems: some parameters need to be randomly initialized, like the weight matrix between two successive layers in deep belief networks (DBNs) and the convolution kernels in convolutional neural networks (CNNs). In addition, traditional deep learning methods need large scale training data to train their complex networks, which causes many problems in the training process. If we do not initialize the parameters properly, the optimization procedure may need a long training time and fall into local minima. Alternatively, many feature learning models have been proposed to learn the intrinsic structures of high-dimensional data and avoid the curse of dimensionality. In particular, most of them can be trained with small and middle scale data, and their learning algorithms are generally based on closed-form solutions or convex optimization. For instance, marginal Fisher analysis (MFA) (Yan et al., 2007; Zhong et al., 2013) is a supervised feature learning model based on the graph embedding framework. It utilizes an intrinsic graph to characterize the intraclass compactness, and a penalty graph to characterize the interclass separability. Its optimal solution can be learned by generalized eigenvalue decomposition. However,
on the one hand, shallow feature learning models cannot work well on data with highly nonlinear structure; on the other hand, few efforts have been made to combine shallow feature learning models in the design of deep architectures.
In order to simultaneously solve the existing problems of deep learning methods and exploit the advantages of feature learning models, we propose a novel deep learning method based on stacked feature learning models. In particular, we stack marginal Fisher analysis (MFA) layer by layer to initialize the deep architecture, and call the result “Marginal Deep Architectures” (MDA). First, the input data are mapped to a higher dimensional space using a random weight matrix. Then we use MFA to learn lower dimensional representations layer by layer. In the implementation of this architecture, we add some techniques to the training process, such as back propagation, dropout and denoising, to fine-tune the network. Finally, a softmax layer is connected to the last feature layer. We have compared our MDA with several feature learning methods and deep learning models on datasets from different domains (including handwritten digit recognition, speech recognition, historical document understanding, image classification and action recognition). Extensive experiments demonstrate that MDA performs better than not only shallow feature learning models but also state-of-the-art deep learning models in small and middle scale applications.
The contributions of this work are highlighted as follows.
1. We propose a novel structure to build a deep architecture. The first hidden layer has twice or quadruple as many neurons as the input layer. We then use feature learning models layer by layer to learn compact representations of the data. Finally, we set the last layer as a softmax classifier.
2. Traditional deep learning models in general need large scale training data. Compared with them, MDA works better in small and middle scale applications because initializing the weight matrices with MFA is much better than random initialization.
3. Our MDA works well on datasets from different domains, such as handwritten digits, spoken letters and natural images. Extensive experiments demonstrate that MDA is a general model to handle small and middle scale data. On the other hand, for large scale datasets like CIFAR-10, MDA works comparably to other deep learning methods.
The rest of this paper is organized as follows: In Section 2, we give a brief overview of related work. In Section 3, we present the marginal Fisher analysis (MFA) and the proposed marginal deep architectures (MDA) in detail. The experimental settings and results are reported in Section 4, while Section 5 concludes this paper with remarks and future work.
2 RELATED WORK
With the development of deep learning methods, many deep networks have been proposed in recent years (Donahue et al., 2013; Krizhevsky et al., 2012; Long et al., 2015; Zhou et al., 2014). These deep learning models show powerful performance in various fields, such as image classification and analysis, document analysis and recognition, and natural language processing. In the area of image analysis, Krizhevsky et al. proposed a large, deep convolutional neural network (AlexNet) to classify the 1.2 million high-resolution images in ImageNet, using an efficient GPU implementation to speed up training. The results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging dataset using purely supervised learning (Krizhevsky et al., 2012). To popularize deep convolutional neural networks, Donahue et al. proposed DeCAF (Deep Convolutional Activation Feature), which is trained in a fully supervised fashion on a large, fixed set of object recognition tasks (Donahue et al., 2013). DeCAF provides a uniform framework that researchers can improve and adapt to specific tasks. However, its performance at scene recognition has not attained the same level of success. To handle this problem, Zhou et al. introduced a new scene-centric database called Places, with over 7 million labeled pictures of scenes. They learn deep features for scene recognition tasks using the same architecture as for ImageNet, and establish new state-of-the-art results on several scene-centric datasets (Zhou et al., 2014). However, these methods based on convolutional operations need very large scale training samples and a long training time. They cannot work well in small and middle scale applications.
In other domains, deep learning methods also achieve good performance. Hinton et al. present the shared views of four research groups that have had recent successes in using DNNs for automatic speech recognition (ASR). DNNs that contain many layers of nonlinear hidden units and a very large output layer can outperform Gaussian mixture models (GMMs) at acoustic modeling for speech recognition on a variety of data sets (Hinton et al., 2012a). In the area of genetics, Xiong et al. use “deep learning” algorithms to derive a computational model that takes DNA sequences as input and applies general rules to predict splicing in human tissues (Xiong et al., 2015). It reveals the genetic origins of disease and how strongly genetic variants affect RNA splicing. In the area of natural language understanding, deep learning models have delivered strong results on topic classification, sentiment analysis and other tasks. Sutskever et al. proposed the Long Short-Term Memory (LSTM) architecture, a general approach that solves sequence-to-sequence problems better than previous methods (Sutskever et al., 2014). In addition, Hinton et al. proposed autoencoder (AE) networks as an effective way to learn low-dimensional codes of high-dimensional data. Based on the autoencoder, there are many excellent works handling various tasks. Vincent et al. proposed the denoising autoencoder (DAE), which makes the learned representations robust to partial corruption of the input data (Vincent et al., 2008). The denoising autoencoder, which initializes deep architectures layer by layer, is very similar to the human visual system. Hinton et al. introduced random ‘dropout’ to prevent overfitting, which improves many benchmark tasks and obtains new records for speech and object recognition (Hinton et al., 2012b). Vincent et al. then proposed stacked denoising autoencoders (SDAE), based on stacking layers of denoising autoencoders (Vincent et al., 2010). SDAE is very useful for learning higher level representations and works well on natural images and handwritten digits. However, for the same reason, these methods also need a large scale training set and a long training time. They have no advantage in handling small and middle scale applications.
Moreover, in the field of feature learning, dimensionality reduction plays a crucial role in compressing and visualizing high-dimensional data and avoiding the “curse of dimensionality” (van der Maaten et al., 2009; van der Maaten, 2007). Traditional dimensionality reduction methods can be classified along three axes: linear or nonlinear, e.g., principal components analysis (PCA) (Jolliffe, 2002) and locality preserving projection (LPP) (Niyogi, 2004) are linear methods, while stochastic neighbor embedding (SNE) (Hinton & Roweis, 2002) is a nonlinear method; supervised or unsupervised, e.g., marginal Fisher analysis (MFA) (Yan et al., 2007; Zhong et al., 2013) and linear discriminant analysis (LDA) (Fisher, 1936) are supervised methods, while PCA is unsupervised; local or global, e.g., MFA and SNE are local methods, while PCA is a global method. Many feature learning models based on geometric theory provide different solutions to the dimensionality reduction problem. Yan et al. proposed a general graph embedding framework from which new dimensionality reduction algorithms can be derived (Yan et al., 2007). Directly using a feature learning model to extract representations from the original data often does not yield a good outcome on its own. Considering this situation, we choose an effective feature learning model and combine it with deep learning algorithms. MFA is one special formulation of the graph embedding framework. It utilizes an intrinsic graph to characterize the intraclass compactness, and a penalty graph to characterize the interclass separability. Our motivation is to combine the advantages of MFA and deep architectures, and to propose a new initialization method for deep learning algorithms.
There are also some excellent works combining feature learning models with deep architectures (Yuan et al.; George et al., 2014; Ngiam et al., 2011). Yuan et al. proposed an improved multilayer learning model to solve scene recognition tasks (Yuan et al.), overcoming the limitations of shallow, one-layer representations for scene recognition. Trigeorgis et al. proposed deep Semi-NMF, which is able to learn such hidden representations from different, unknown attributes of a given dataset (George et al., 2014). Ngiam et al. proposed a deep architecture to learn features over multiple modalities (Ngiam et al., 2011). They showed that multi-modality feature learning is better than single-modality learning and achieved good performance on video and audio datasets. In general, however, we can only obtain data from one modality. In this work, we combine the advantages of MFA and deep architectures based on stacked feature learning models (Zheng et al., 2014; 2015), and use deep learning techniques like back propagation, denoising and dropout to fine-tune the network. The advantage of this deep architecture is that we can learn desirable weight matrices even if the training data is not large. Compared with traditional deep learning models and shallow feature learning models, our MDA achieves state-of-the-art results in most cases.
3 MARGINAL DEEP ARCHITECTURES (MDA)
In this section, we first introduce a novel framework of deep architectures, and then describe marginal Fisher analysis (MFA) and the proposed marginal deep architectures (MDA) in detail. In addition, we present the deep learning techniques used in the MDA model, including back propagation, denoising and dropout.
3.1 A NOVEL FRAMEWORK OF DEEP ARCHITECTURES
The feature learning problem is generally formulated as follows. Given n data points $\{\mathbf{x}_1^T, \ldots, \mathbf{x}_n^T\} \in \mathbb{R}^D$, where D is the dimensionality of the data space, we seek compact representations of these data, i.e., $\{\mathbf{y}_1^T, \ldots, \mathbf{y}_n^T\} \in \mathbb{R}^d$, where d is the dimensionality of the low dimensional embeddings. In order to improve the accuracy of shallow feature learning models, we use stacked feature learning models to construct the deep architectures (Zheng et al., 2014; 2015), which is a general framework for different applications. In this case, the mapping of data from the original D-dimensional space to the resulting d-dimensional space can be described as
$$D \Longrightarrow D_1 \Longrightarrow \cdots \Longrightarrow D_i \Longrightarrow \cdots \Longrightarrow D_{p-1} \Longrightarrow d, \quad (1)$$
where $D_1$ is the first, higher dimensional space, whose number of nodes is twice or quadruple that of the input layer, $D_i$ represents the dimensionality of the i-th intermediate representation space, and p is the total number of mapping steps. Here, we can use different feature learning models for the learning of each layer. As the feature learning models are optimized layer by layer, we obtain the mapping functions between successive layers. The first hidden layer is initialized randomly by $\mathbf{W}_{r1}$, and its representation is
$$\mathbf{a}^1 = g(\mathbf{W}_{r1}^T \mathbf{x} + \mathbf{b}), \quad (2)$$
where $g(\cdot)$ is a non-linear activation (transfer) function. Then, we can use feature learning models to initialize the next layers. The representations of the following hidden layers are
$$\mathbf{a}^k = g(\mathbf{W}_{F_{k-1}}^T \mathbf{a}^{k-1} + \mathbf{b}), \quad (3)$$
where $\mathbf{W}_{F_{k-1}}$ is the weight matrix of the (k-1)-th layer learned from a feature learning model.
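A minimal sketch of this layer-wise initialization follows, assuming a generic `fit_feature_model(A, d)` callback that returns a projection matrix of shape (previous dimension, d) learned by MFA, PCA or any other feature learning model. All function and variable names here are illustrative rather than taken from the authors' code, and bias terms are omitted for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_stacked_layers(X, layer_dims, fit_feature_model, seed=0):
    """Layer-wise initialization of Eqs. (1)-(3): a random expanding
    first layer, then feature-learning projections for later layers."""
    rng = np.random.default_rng(seed)
    weights = []
    # First layer: random weight matrix W_r1 mapping D -> D1 (Eq. 2).
    W = rng.normal(scale=0.01, size=(X.shape[1], layer_dims[0]))
    weights.append(W)
    A = sigmoid(X @ W)
    # Later layers: weights learned by the feature model on the
    # previous layer's activations (Eq. 3), then passed through g.
    for d in layer_dims[1:]:
        W = fit_feature_model(A, d)
        weights.append(W)
        A = sigmoid(A @ W)
    return weights
```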
3.2 MARGINAL FISHER ANALYSIS (MFA)
Based on our novel framework of deep architecture, we introduce Marginal Fisher Analysis (MFA) to build MDA. Here, many traditional feature learning models, such as linear discriminant analysis
(LDA), can be used as building blocks of MDA. Take LDA as an example: it assumes that the data of each class follow a Gaussian distribution. However, this assumption is often not satisfied in the real world, and without it LDA cannot separate data with nonlinear structure well. Alternatively, MFA solves this problem effectively. Hence, considering its learning capability, we choose MFA as the building block of MDA in this work. MFA uses the graph embedding framework to set up an intrinsic graph that characterizes the intraclass compactness and a penalty graph that characterizes the interclass separability. The marginal Fisher criterion is defined as
$$\mathbf{W}^* = \arg\min_{\mathbf{W}} \frac{\mathrm{tr}\left(\mathbf{W}^T \mathbf{X} (\mathbf{D} - \mathbf{A}) \mathbf{X}^T \mathbf{W}\right)}{\mathrm{tr}\left(\mathbf{W}^T \mathbf{X} (\mathbf{D}^p - \mathbf{A}^p) \mathbf{X}^T \mathbf{W}\right)}, \quad (4)$$
where $\mathbf{A}$ and $\mathbf{A}^p$ are the affinity matrices of the intrinsic and penalty graphs, and $\mathbf{D}$ and $\mathbf{D}^p$ are diagonal matrices with elements $D_{ii} = \sum_j A_{ij}$ and $D^p_{ii} = \sum_j A^p_{ij}$, respectively. The final projection matrix is the product of the PCA projection and the marginal Fisher projection,
$$\mathbf{W}_{\mathrm{MFA}} = \mathbf{W}_{\mathrm{PCA}} \mathbf{W}^*. \quad (5)$$
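As a concrete illustration, the common ratio-trace relaxation of Eq. (4) can be solved as a generalized eigenvalue problem. The sketch below assumes the intrinsic and penalty affinity matrices A and A^p have already been built from the k-NN graphs, and adds a small ridge to keep the penalty scatter invertible; it is an illustrative implementation, not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh

def mfa_projection(X, A, Ap, d):
    """Solve (a relaxation of) Eq. (4). X is (n, D) with samples as
    rows; A and Ap are the intrinsic and penalty affinity matrices."""
    L = np.diag(A.sum(axis=1)) - A      # intraclass Laplacian D - A
    Lp = np.diag(Ap.sum(axis=1)) - Ap   # penalty Laplacian D^p - A^p
    S = X.T @ L @ X                     # compactness scatter
    Sp = X.T @ Lp @ X                   # separability scatter
    # Generalized eigenproblem S w = lambda Sp w; eigh returns
    # eigenvalues in ascending order, so the first d eigenvectors
    # minimize the criterion.
    ridge = 1e-6 * np.trace(Sp) / Sp.shape[0]
    _, vecs = eigh(S, Sp + ridge * np.eye(Sp.shape[0]))
    return vecs[:, :d]                  # columns form W*
```

In practice the data are first projected by PCA, as in Eq. (5), which also keeps the scatter matrices well conditioned.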
3.3 MARGINAL DEEP ARCHITECTURES (MDA)
In order to combine the advantages of MFA and the proposed deep architectures, we propose the marginal deep architectures (or MDA). The MDA, inherited from the proposed framework of deep architectures, is shown in Fig. 1. Given an input vector $\mathbf{x} \in [0, 1]^d$, we first map it to a higher dimensional space by a random weight matrix $\mathbf{W}_{r1}$. The representation of the first hidden layer is computed as
$$\mathbf{a}^1 = s(\mathbf{W}_{r1}^T \mathbf{x} + \mathbf{b}), \quad (6)$$
where $s(\cdot)$ is the sigmoid function $s(x) = \frac{1}{1 + e^{-x}}$, $\mathbf{b}$ is the bias term, and $\mathbf{a}^1$ is the output of the first layer. From the second layer to the (n-1)-th layer, we use the weight matrices learned by MFA to map layer by layer:
$$\mathbf{a}^k = s(\mathbf{W}_{\mathrm{MFA}_{k-1}}^T \mathbf{a}^{k-1} + \mathbf{b}). \quad (7)$$
The last layer is a softmax regression layer, and its number of neurons equals the number of categories. The cost function is defined as
$$J(\mathbf{w}) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{K} I(y_i = j) \log \frac{\exp(\mathbf{w}_j^T \mathbf{a}_i^{n-1})}{\sum_{l=1}^{K} \exp(\mathbf{w}_l^T \mathbf{a}_i^{n-1})}, \quad (8)$$
where $I(\cdot)$ is the indicator function, $I(x) = 1$ if x is true and $I(x) = 0$ otherwise, and $y_i$ is the label corresponding to $\mathbf{x}_i$. The probability that $\mathbf{x}_i$ is classified to class j is
$$p(y_i = j \mid \mathbf{x}_i, \mathbf{w}) = \frac{\exp(\mathbf{w}_j^T \mathbf{a}_i^{n-1})}{\sum_{l=1}^{K} \exp(\mathbf{w}_l^T \mathbf{a}_i^{n-1})}. \quad (9)$$
Taking derivatives, one can show that the gradient is
$$\nabla_{\mathbf{w}_j} J(\mathbf{w}) = -\frac{1}{N} \sum_{i=1}^{N} \mathbf{a}_i^{n-1} \left( I(y_i = j) - p(y_i = j \mid \mathbf{x}_i, \mathbf{w}) \right). \quad (10)$$
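The following sketch puts Eqs. (6)-(9) together as a forward pass; the list `weights` is assumed to hold W_r1, the MFA matrices and the softmax weights in order, and all names are illustrative rather than the authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mda_forward(X, weights, biases):
    """Sigmoid hidden layers (Eqs. 6-7) followed by a softmax output
    (Eq. 9); returns class probabilities and the activations needed
    later for back propagation. X is (n, D) with samples as rows."""
    a = X
    activations = [a]
    for W, b in zip(weights[:-1], biases[:-1]):
        a = sigmoid(a @ W + b)
        activations.append(a)
    z = a @ weights[-1] + biases[-1]
    z = z - z.max(axis=1, keepdims=True)   # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p, activations
```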
If the (n−1)-th layer has more neurons than the last layer, we can continue to use MFA for this mapping. On the contrary, if the (n−1)-th layer has fewer neurons than the last layer, we randomly initialize the weight matrix between these two layers. Next, in order to improve MDA, we introduce the back propagation, denoising and dropout operations.
3.4 BACK PROPAGATION
In order to adjust the network, we use back propagation (Rumelhart et al., 1986) to compute the partial derivatives and stochastic gradient descent to update the weight matrices and bias terms. For each node i in the output layer (the n-th layer), we compute an error term as
$$\delta_i^n = \nabla J(\mathbf{w}), \quad (11)$$
where $J(\mathbf{w})$ is the cost function of Eq. (8) and $\nabla J(\mathbf{w})$ is computed from Eq. (10). For each node i from the (n-1)-th layer down to the second layer, the error term is computed as
$$\delta_i^k = \Big( \sum_{j} w_{ji}^k \, \delta_j^{k+1} \Big) \, s'(z_i^k), \quad (12)$$
where the sum runs over the nodes j of layer k+1 and $z_i^k$ is the pre-activation of node i in layer k. The back propagation procedure computes the gradient of the objective function with respect to the weights of the stacked modules, starting from the output at the top and ending at the input at the bottom.
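A sketch of these error terms, consistent with Eqs. (10)-(12) and the `mda_forward` sketch above (again with illustrative names; a mini-batch of one-hot labels is assumed):

```python
def mda_backward(p, y_onehot, activations, weights):
    """Propagate the error terms of Eqs. (11)-(12) from the softmax
    layer down and accumulate the weight gradients (biases analogous)."""
    n = p.shape[0]
    delta = (p - y_onehot) / n              # output error, from Eq. (10)
    grads = []
    for k in range(len(weights) - 1, -1, -1):
        grads.append(activations[k].T @ delta)
        if k > 0:
            a = activations[k]
            # sigmoid derivative: s'(z) = s(z)(1 - s(z)), as in Eq. (12)
            delta = (delta @ weights[k].T) * a * (1.0 - a)
    return grads[::-1]
```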
3.5 DENOISING OPERATION
Vincent et al. proposed the denoising autoencoder to improve the robustness of the autoencoder (Vincent et al., 2008). It is very similar to regularization methods and helps avoid the overfitting problem. The basic idea is to corrupt part of the input data by a desired proportion ν of “destruction”: for each input $\mathbf{x}$, a fixed number νd of components are chosen at random and their values are forced to 0, while the others are left untouched. The initial input $\mathbf{x}$ is turned into a partially destroyed version $\tilde{\mathbf{x}}$ by means of a stochastic mapping,
$$\tilde{\mathbf{x}} \sim q_D(\tilde{\mathbf{x}} \mid \mathbf{x}), \quad (13)$$
where $q_D(\tilde{\mathbf{x}} \mid \mathbf{x})$ is the corruption distribution. Then, a hidden representation $\mathbf{h}$ is computed as
$$\mathbf{h} = s(\mathbf{W}^T \tilde{\mathbf{x}} + \mathbf{b}). \quad (14)$$
In our MDA, we use this idea to improve the network; please refer to Fig. 1 for a clear view. For the input layer, the output of the first hidden layer becomes
$$\mathbf{a}^1 = s(\mathbf{W}_{r1}^T \tilde{\mathbf{x}} + \mathbf{b}_1), \quad (15)$$
where $\mathbf{W}_{r1}$ is the random weight matrix of the first layer and $\mathbf{b}_1$ is its bias term. The denoising operation corresponds to an additional criterion: robustness to partial destruction of the input, meaning that a good intermediate representation should be learnable from a corrupted version of the observed input. This operation helps learn a more stable structure and avoids the overfitting problem in most cases.
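A minimal sketch of this masking corruption (illustrative; here each component is zeroed independently with probability ν, a common way to realize the fixed-fraction destruction described above):

```python
import numpy as np

def corrupt(X, nu, seed=0):
    """Masking noise of Eq. (13): force roughly a fraction nu of the
    input components to zero, leaving the rest untouched."""
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape) >= nu        # keep with probability 1 - nu
    return X * mask
```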
3.6 DROPOUT
For the same reason as the denoising operation, dropout is a technique to prevent overfitting (Hinton et al., 2012b). When a large feedforward neural network is trained on a small training set, dropout improves performance on the test set. To prevent complex co-adaptations on the training data, the basic idea of dropout is that each hidden node is randomly omitted from the network with a probability β, so that a hidden node cannot rely on other hidden nodes being present. From another point of view, dropout is a very efficient way of performing model averaging with neural networks: it is equivalent to training many separate networks and applying each of them to the test data, while saving training time, since we implicitly average the predictions of a very large number of different networks. Fig. 1 shows the dropout operation in our MDA.
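A minimal dropout sketch; note that this is the common "inverted" variant, which rescales the kept units at training time so that no averaging correction is needed at test time (the paper's exact test-time procedure may differ):

```python
import numpy as np

def dropout(a, beta, seed=0):
    """Omit each hidden unit with probability beta (Hinton et al.,
    2012b); surviving units are scaled by 1 / (1 - beta)."""
    rng = np.random.default_rng(seed)
    mask = (rng.random(a.shape) >= beta) / (1.0 - beta)
    return a * mask
```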
4 EXPERIMENTS
4.1 DATASET DESCRIPTIONS
We evaluate the performance of MDA on five benchmark data sets; details are shown in Tab. 1. The USPS 1 data set is a handwritten digit image data set that includes 7291 training samples and 2007 test samples from 10 classes with 256 dimensional features. The task is to recognize the digits 0 to 9. The Isolet 2 data set is a collection of audio feature vectors of spoken letters from the English alphabet. It includes 6238 training samples and 1559 test samples from 26 classes with 617 dimensional features. The task is to identify which letter is spoken based on the recorded
1http://www.gaussianprocess.org/gpml/data/ 2http://archive.ics.uci.edu/ml/datasets/ISOLET
(and pre-processed) audio signal. Sensor 3 is a sensorless drive diagnosis data set that includes 46816 training samples and 11693 test samples from 11 classes with 48 dimensional features. The features are extracted from electric current drive signals. The task is to classify 11 classes corresponding to different conditions of a drive with intact and defective components. Covertype 4 contains geological and map-based data from four wilderness areas located in the Roosevelt National Forest of northern Colorado. It includes 15120 training samples and 565892 test samples from 7 classes with 54 dimensional features. The task is to identify the forest cover type from cartographic variables. For the IbnSina 5 ancient Arabic document data set, we use 50 pages of the manuscript for training (17543 training samples) and 10 pages for testing (3125 test samples). The data samples belong to 174 classes of subwords and are of dimensionality 200.
In addition, we use the large scale dataset CIFAR-10 6 to test our MDA on large scale applications. The CIFAR-10 dataset consists of 60000 32 × 32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. We also test our MDA on a specific task using the CMU motion capture (CMU mocap) data set 7. The CMU mocap data set includes three categories, namely jumping, running and walking. We choose 49 video sequences from four subjects. For each sequence, the features are generated using Lawrence's method 8, with dimensionality 93 (Zhong et al., 2010). Because of the small number of samples in CMU mocap, we adopt 10-fold cross-validation in our experiments and use the average error rate and standard deviation to evaluate the performance.
4.2 CLASSIFICATION ON FIVE BENCHMARK DATA SETS
4.2.1 BASELINE METHODS
In order to evaluate the performance of MDA, we compared it with 5 deep learning models, including the autoencoder (AE) (Hinton & Salakhutdinov, 2006), stacked autoencoders, denoising autoencoders (Vincent et al., 2008), stacked denoising autoencoders (Vincent et al., 2010) and stacked denoising autoencoders with dropout; 2 feature learning models, MFA (Zhong et al., 2013; Yan et al., 2007) and PCA (Jolliffe, 2002); a PCA deep architecture (PDA) based on our uniform framework; and the classification accuracy in the original space.
4.2.2 EXPERIMENTAL SETTINGS
All of the deep learning methods share the same settings. The size of the minibatch was set to 100, the learning rate and momentum were left at their default values of 1 and 0.5, the number of epochs was set to 400, and the dropout rate and denoising rate ν were set to 0.1. For the AE and SAE, the weight penalty of the L2 norm was set to 10−4. For MFA, the number of nearest neighbors for constructing the intrinsic graph was set to 5, while that for constructing the penalty graph was set to 20. The target spaces of MFA and PCA on the different data sets are shown in Tab. 1. For the USPS data set, the architecture was set to 256 − 512 − 256 − 128 − 64 − 32. For the Isolet data set, the architecture was set to 617 − 1324 − 617 − 308. For the Sensor data set, the architecture was set to 48 − 96 − 48 − 24.
3http://archive.ics.uci.edu/ml/datasets/Dataset+for+Sensorless+Drive+Diagnosis# 4http://archive.ics.uci.edu/ml/datasets/Covertype 5http://www.causality.inf.ethz.ch/al data/IBN SINA.html 6http://www.cs.toronto.edu/ kriz/cifar.html 7http://http://mocap.cs.cmu.edu/ 8http://is6.cs.man.ac.uk/∼neill/mocap/
For the Covertype data set, we set the architecture to 54 − 216 − 108 − 54 − 27. Finally, for the Ibnsina data set, the architecture was set to 200 − 400 − 200 − 100.
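For readability, the per-dataset architectures quoted above can be collected in a single table; this encoding is purely illustrative:

```python
# First entry: input dimensionality; last entry: final feature layer
# before the softmax classifier (layer sizes as quoted in the text).
ARCHITECTURES = {
    "USPS":      [256, 512, 256, 128, 64, 32],
    "Isolet":    [617, 1324, 617, 308],
    "Sensor":    [48, 96, 48, 24],
    "Covertype": [54, 216, 108, 54, 27],
    "IbnSina":   [200, 400, 200, 100],
}
```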
4.2.3 CLASSIFICATION RESULTS
The experimental results are shown in Tab. 2. Our MDA achieves the best results on four of the data sets; on the Sensor data set it achieves the second best result, below only the PDA. The PDA achieves the best result on the Sensor data set and the second best results on the other data sets. These results demonstrate that our uniform deep architectures achieve good performance in most cases. In addition, MDA outperforms not only the traditional deep learning models but also the shallow feature learning models. This shows that deep architectures based on stacked feature learning models can learn better features than shallow feature learning models.
4.3 EVALUATION
4.3.1 DIFFERENT STRUCTURES FOR MDA
In order to evaluate the desired structure of MDA, we varied the number of nodes in the second layer. For the USPS data set, we first removed the second layer, giving the architecture 256−128−64−32. Then we set the number of nodes in the second layer to twice that of the input layer, giving the architecture 256−512−256−128−64−32. Next, the number of nodes was set to quadruple that of the input layer, giving 256−1024−512−256−128−64−32. Finally, with the number of nodes set to octuple that of the input layer, the architecture was 256−2048−1024−512−256−128−64−32. The structures for the other data sets are shown in Tab. 3.
The experimental results are shown in Tab. 4. When the number of nodes in the second layer is twice that of the input layer, MDA achieves the minimum classification error on all data sets except Covertype. When the number of nodes in the second layer is quadruple that of the input layer, MDA gets the worst result on the Covertype data set. We conclude that MDA works well when the number of nodes in the second layer is twice or quadruple that of the input layer.
4.3.2 DIFFERENT NUMBER OF HIDDEN LAYERS FOR MDA
In order to evaluate how many hidden layers suit different datasets, we designed experiments with different numbers of hidden layers. We used 1 ∼ 7 hidden layers on the USPS and Isolet datasets and 1 ∼ 5 hidden layers on the Covertype, Sensor and Ibnsina datasets. The experimental settings were the same as in the previous experiments.
Tab. 5 shows the classification error on the 5 datasets with different numbers of hidden layers. All datasets achieved their best results with 3 hidden layers, except USPS, which achieved its best result with 5 hidden layers. From 1 to 3 hidden layers, the classification error decreases as the number of layers increases on all datasets. For small and middle scale applications, we do not need very deep architectures; for large scale applications, we can design deeper architectures to achieve better performance.
4.4 CLASSIFICATION ON LARGE SCALE DATASET CIFAR-10
The previous section showed the advantages of MDA in small and middle scale applications. In order to evaluate the universality of MDA, we chose a relatively large scale dataset, CIFAR-10, to test its performance.
In our experiments, we first transformed the color images to gray images in order to reduce the input dimensionality, and took each sample as a 1024 dimensional vector, which is the input of our MDA. We call this data set gray-CIFAR10. The architecture was set to 1024−2048−1024−512−256−128−64, the minibatch size was set to 100, the dropout and denoising ratios were set to 0.1, the number of epochs was set to 400, the learning rate was set to 1, and the momentum was set to 0.5. We compared our MDA with the previous 6 methods.
Tab. 6(a) shows the classification error on gray-CIFAR10. PDA and MDA achieved the best results among these 7 methods. However, none of the methods performed well in this setting, because of the grayscale preprocessing.
4.5 CLASSIFICATION ON CMU MOCAP DATA SET
The CMU mocap data set is a very small dataset with only 49 samples; traditional deep learning methods do not work well in this kind of application. We tested our MDA and PDA and compared them with 5 other deep learning models. The architectures for all deep models (except the PDA) were set to 93−186−93−47−24. Specifically, since the CMU mocap data set only has 49 samples, PCA can reduce the dimensionality to at most 49, so the architecture of PDA was set to 93−186−24. The denoising and dropout ratios were set to 0.1 for the DAE, DAE with dropout, SDAE, SAE, PDA and MDA. The weight penalty for the AE was set to 10−4. The learning rate was set to 0.01, the momentum to 0.5 and the number of epochs to 600. The experiment was evaluated with 10-fold cross validation. The experimental results are shown in Tab. 6(b).
In Tab. 6(b), our PDA and MDA achieve the best results on this dataset and have lower standard deviations than the other deep learning models, demonstrating that PDA and MDA are more stable. The traditional autoencoder, SDAE and DAE with dropout achieved the same result on this dataset, better than SAE and DAE.
5 CONCLUSION
In this paper, we proposed a novel deep learning framework based on stacking feature learning models to handle small and middle scale data sets. We then introduced MFA into this framework, yielding MDA. Deep learning techniques such as back propagation, denoising and dropout are applied to MDA to improve its performance. Extensive experiments on 7 data sets of different types demonstrate that MDA outperforms not only shallow feature learning models but also state-of-the-art deep learning models on small and middle scale applications. The evaluation of MDA also shows how its parameters should be adjusted to make it work well. For future work, we plan to try other feature learning models and to explore different structures for this deep learning model. In addition, we plan to explore new deep architectures based on this framework to handle large scale datasets. | 1. What are the limitations of existing deep architectures that the authors aim to address?
2. What is the proposed method to overcome these limitations?
3. How does the proposed method compare to other feature learning methods on small to mid-size datasets?
4. What are the existing techniques in deep learning that the authors apply to improve performance?
5. Are there any concerns regarding the theoretical or empirical justification for stacking MFAs?
6. How do the authors determine the particular model architecture and hyperparameters used in each dataset?
7. What is the significance of the contributions made by the paper?
8. Are there any issues with the clarity or omission of details in the writing of the paper? | Review | Review
The authors pointed out some limitations of existing deep architectures, in particular that they are hard to optimize on small or mid-size datasets, and proposed to stack marginal Fisher analysis (MFA) to build deep models. The proposed method is tested on several small to mid-size datasets and compared with several feature learning methods. The authors also applied some existing techniques in deep learning, such as backprop, denoising and dropout, to improve performance.
The new contribution of the paper is limited. MFA was proposed long ago, and the authors fail to theoretically or empirically justify the stacking of MFAs. The authors did not include in the comparison any deep architecture that requires backprop over multiple layers, which is what they set out to address; instead, all the methods compared were learned layer by layer. Would a randomly initialized deep model such as a DBN or CNN perform poorly on these datasets? It is also not clear how the authors came up with the particular model architectures and hyperparameters used for the different datasets. The writing of the paper needs to be significantly improved. A lot of details are omitted, for example how dropout is applied in the MFA. |
ICLR | Title
Marginal Deep Architectures: Deep learning for Small and Middle Scale Applications
Abstract
In recent years, many deep architectures have been proposed in different fields. However, to obtain good results, most of these deep models need a large amount of training data. In this paper, for small and middle scale applications, we propose a novel deep learning framework based on stacked feature learning models. In particular, we stack marginal Fisher analysis (MFA) layer by layer to initialize the deep architecture, and call the result “Marginal Deep Architectures” (MDA). In the implementation of MDA, the weight matrices of MFA are first learned layer by layer, and then we exploit deep learning techniques, such as back propagation, dropout and denoising, to fine-tune the network. To evaluate the effectiveness of MDA, we have compared it with several feature learning methods and deep learning models on 7 small and middle scale real-world applications, including handwritten digit recognition, speech recognition, historical document understanding, image classification and action recognition. Extensive experiments demonstrate that MDA performs better than not only shallow feature learning models but also state-of-the-art deep learning models in these applications.
1 INTRODUCTION
Deep learning methods have achieved desirable performance in many domains, such as image classification and detection, document analysis and recognition, natural language processing and video analysis (Krizhevsky et al., 2012; Chan et al., 2014; Ciresan et al., 2010; Collobert & Weston, 2008; Le et al., 2011). Deep learning methods learn the data representation by using multiple processing layers, which discover the intricate structure of high dimensional data with multiple levels of abstraction (LeCun et al., 2015). For example, in face recognition, the features learned by the first layer may be edges, directions and some local information. The second layer typically detects object parts, which are combinations of edges and directions. The higher layers may further abstract the face image by combining the features of previous layers (outline of the eyes, nose, lips). This procedure is very similar to the human visual and perceptual system.
In recent years, many deep learning methods have been proposed (l. Boureau & others, 2008; Lee et al., 2009b;a; Hinton & Salakhutdinov, 2006). However, most models face difficult problems: some parameters need to be randomly initialized, like the weight matrix between two successive layers in deep belief networks (DBNs) and the convolution kernels in convolutional neural networks (CNNs). In addition, traditional deep learning methods need large scale training data to train their complex networks, which causes many problems in the training process. If we do not initialize the parameters properly, the optimization procedure may need a long training time and fall into local minima. Alternatively, many feature learning models have been proposed to learn the intrinsic structures of high-dimensional data and avoid the curse of dimensionality. In particular, most of them can be trained with small and middle scale data, and their learning algorithms are generally based on closed-form solutions or convex optimization. For instance, marginal Fisher analysis (MFA) (Yan et al., 2007; Zhong et al., 2013) is a supervised feature learning model based on the graph embedding framework. It utilizes an intrinsic graph to characterize the intraclass compactness, and a penalty graph to characterize the interclass separability. Its optimal solution can be learned by generalized eigenvalue decomposition. However,
on the one hand, shallow feature learning models cannot work well on data with highly nonlinear structure; on the other hand, few efforts have been made to combine shallow feature learning models in the design of deep architectures.
In order to simultaneously solve the existing problems of deep learning methods and exploit the advantages of feature learning models, we propose a novel deep learning method based on stacked feature learning models. In particular, we stack marginal Fisher analysis (MFA) layer by layer to initialize the deep architecture, and call the result “Marginal Deep Architectures” (MDA). First, the input data are mapped to a higher dimensional space using a random weight matrix. Then we use MFA to learn lower dimensional representations layer by layer. In the implementation of this architecture, we add some techniques to the training process, such as back propagation, dropout and denoising, to fine-tune the network. Finally, a softmax layer is connected to the last feature layer. We have compared our MDA with several feature learning methods and deep learning models on datasets from different domains (including handwritten digit recognition, speech recognition, historical document understanding, image classification and action recognition). Extensive experiments demonstrate that MDA performs better than not only shallow feature learning models but also state-of-the-art deep learning models in small and middle scale applications.
The contributions of this work are highlighted as follows.
1. We propose a novel structure to build a deep architecture. The first hidden layer has twice or quadruple as many neurons as the input layer. We then use feature learning models layer by layer to learn compact representations of the data. Finally, we set the last layer as a softmax classifier.
2. Traditional deep learning models in general need large scale training data. Compared with them, MDA works better in small and middle scale applications because initializing the weight matrices with MFA is much better than random initialization.
3. Our MDA works well on datasets from different domains, such as handwritten digits, spoken letters and natural images. Extensive experiments demonstrate that MDA is a general model to handle small and middle scale data. On the other hand, for large scale datasets like CIFAR-10, MDA works comparably to other deep learning methods.
The rest of this paper is organized as follows: In Section 2, we give a brief overview of related work. In Section 3, we present the marginal Fisher analysis (MFA) and the proposed marginal deep architectures (MDA) in detail. The experimental settings and results are reported in Section 4, while Section 5 concludes this paper with remarks and future work.
2 RELATED WORK
With the development of deep learning methods, many deep networks have been proposed in recent years (Donahue et al., 2013; Krizhevsky et al., 2012; Long et al., 2015; Zhou et al., 2014). These deep learning models show powerful performance in various fields, such as image classification and analysis, document analysis and recognition, and natural language processing. In the area of image analysis, Krizhevsky et al. proposed a large, deep convolutional neural network (AlexNet) to classify the 1.2 million high-resolution images in ImageNet, using an efficient GPU implementation to speed up training. The results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging dataset using purely supervised learning (Krizhevsky et al., 2012). To popularize deep convolutional neural networks, Donahue et al. proposed DeCAF (Deep Convolutional Activation Feature), which is trained in a fully supervised fashion on a large, fixed set of object recognition tasks (Donahue et al., 2013). DeCAF provides a uniform framework that researchers can improve and adapt to specific tasks. However, its performance at scene recognition has not attained the same level of success. To handle this problem, Zhou et al. introduced a new scene-centric database called Places, with over 7 million labeled pictures of scenes. They learn deep features for scene recognition tasks using the same architecture as for ImageNet, and establish new state-of-the-art results on several scene-centric datasets (Zhou et al., 2014). However, these methods based on convolutional operations need very large scale training samples and a long training time. They cannot work well in small and middle scale applications.
In other domains, deep learning methods also achieve good performance. Hinton et al. present the shared views of four research groups that have had recent successes in using DNNs for automatic speech recognition (ASR). DNNs that contain many layers of nonlinear hidden units and a very large output layer can outperform Gaussian mixture models (GMMs) at acoustic modeling for speech recognition on a variety of data sets (Hinton et al., 2012a). In the area of genetics, Xiong et al. use “deep learning” algorithms to derive a computational model that takes DNA sequences as input and applies general rules to predict splicing in human tissues (Xiong et al., 2015). It reveals the genetic origins of disease and how strongly genetic variants affect RNA splicing. In the area of natural language understanding, deep learning models have delivered strong results on topic classification, sentiment analysis and other tasks. Sutskever et al. proposed the Long Short-Term Memory (LSTM) architecture, a general approach that solves sequence-to-sequence problems better than previous methods (Sutskever et al., 2014). In addition, Hinton et al. proposed autoencoder (AE) networks as an effective way to learn low-dimensional codes of high-dimensional data. Based on the autoencoder, there are many excellent works handling various tasks. Vincent et al. proposed the denoising autoencoder (DAE), which makes the learned representations robust to partial corruption of the input data (Vincent et al., 2008). The denoising autoencoder, which initializes deep architectures layer by layer, is very similar to the human visual system. Hinton et al. introduced random ‘dropout’ to prevent overfitting, which improves many benchmark tasks and obtains new records for speech and object recognition (Hinton et al., 2012b). Vincent et al. then proposed stacked denoising autoencoders (SDAE), based on stacking layers of denoising autoencoders (Vincent et al., 2010). SDAE is very useful for learning higher level representations and works well on natural images and handwritten digits. However, for the same reason, these methods also need a large scale training set and a long training time. They have no advantage in handling small and middle scale applications.
Moreover, in the field of feature learning, dimensionality reduction plays a crucial role in compressing and visualizing high-dimensional data and avoiding the “curse of dimensionality” (van der Maaten et al., 2009; van der Maaten, 2007). Traditional dimensionality reduction methods can be classified along three axes: linear or nonlinear, e.g., principal components analysis (PCA) (Jolliffe, 2002) and locality preserving projection (LPP) (Niyogi, 2004) are linear methods, while stochastic neighbor embedding (SNE) (Hinton & Roweis, 2002) is a nonlinear method; supervised or unsupervised, e.g., marginal Fisher analysis (MFA) (Yan et al., 2007; Zhong et al., 2013) and linear discriminant analysis (LDA) (Fisher, 1936) are supervised methods, while PCA is unsupervised; local or global, e.g., MFA and SNE are local methods, while PCA is a global method. Many feature learning models based on geometric theory provide different solutions to the dimensionality reduction problem. Yan et al. proposed a general graph embedding framework from which new dimensionality reduction algorithms can be derived (Yan et al., 2007). Directly using a feature learning model to extract representations from the original data often does not yield a good outcome on its own. Considering this situation, we choose an effective feature learning model and combine it with deep learning algorithms. MFA is one special formulation of the graph embedding framework. It utilizes an intrinsic graph to characterize the intraclass compactness, and a penalty graph to characterize the interclass separability. Our motivation is to combine the advantages of MFA and deep architectures, and to propose a new initialization method for deep learning algorithms.
There are also some excellent works combining feature learning models with deep architectures (Yuan et al.; George et al., 2014; Ngiam et al., 2011). Yuan et al. proposed an improved multilayer learning model to solve scene recognition tasks (Yuan et al.), overcoming the limitations of shallow, one-layer representations for scene recognition. Trigeorgis et al. proposed deep Semi-NMF, which is able to learn such hidden representations from different, unknown attributes of a given dataset (George et al., 2014). Ngiam et al. proposed a deep architecture to learn features over multiple modalities (Ngiam et al., 2011). They showed that multi-modality feature learning is better than single-modality learning and achieved good performance on video and audio datasets. In general, however, we can only obtain data from one modality. In this work, we combine the advantages of MFA and deep architectures based on stacked feature learning models (Zheng et al., 2014; 2015), and use deep learning techniques like back propagation, denoising and dropout to fine-tune the network. The advantage of this deep architecture is that we can learn desirable weight matrices even if the training data is not large. Compared with traditional deep learning models and shallow feature learning models, our MDA achieves state-of-the-art results in most cases.
3 MARGINAL DEEP ARCHITECTURES (MDA)
In this section, we first introduce a novel framework of deep architectures, and then describe marginal Fisher analysis (MFA) and the proposed marginal deep architectures (MDA) in detail. In addition, we present the deep learning techniques used in the MDA model, including back propagation, denoising and dropout.
3.1 A NOVEL FRAMEWORK OF DEEP ARCHITECTURES
The feature learning problem is generally formulated as follows. Given n data points $\{\mathbf{x}_1^T, \ldots, \mathbf{x}_n^T\} \in \mathbb{R}^D$, where D is the dimensionality of the data space, we seek compact representations of these data, i.e., $\{\mathbf{y}_1^T, \ldots, \mathbf{y}_n^T\} \in \mathbb{R}^d$, where d is the dimensionality of the low dimensional embeddings. In order to improve the accuracy of shallow feature learning models, we use stacked feature learning models to construct the deep architectures (Zheng et al., 2014; 2015), which is a general framework for different applications. In this case, the mapping of data from the original D-dimensional space to the resulting d-dimensional space can be described as
$$D \Longrightarrow D_1 \Longrightarrow \cdots \Longrightarrow D_i \Longrightarrow \cdots \Longrightarrow D_{p-1} \Longrightarrow d, \quad (1)$$
where $D_1$ is the first, higher dimensional space, whose number of nodes is twice or quadruple that of the input layer, $D_i$ represents the dimensionality of the i-th intermediate representation space, and p is the total number of mapping steps. Here, we can use different feature learning models for the learning of each layer. As the feature learning models are optimized layer by layer, we obtain the mapping functions between successive layers. The first hidden layer is initialized randomly by $\mathbf{W}_{r1}$, and its representation is
$$\mathbf{a}^1 = g(\mathbf{W}_{r1}^T \mathbf{x} + \mathbf{b}), \quad (2)$$
where $g(\cdot)$ is a non-linear activation (transfer) function. Then, we can use feature learning models to initialize the next layers. The representations of the following hidden layers are
$$\mathbf{a}^k = g(\mathbf{W}_{F_{k-1}}^T \mathbf{a}^{k-1} + \mathbf{b}), \quad (3)$$
where $\mathbf{W}_{F_{k-1}}$ is the weight matrix of the (k-1)-th layer learned from a feature learning model.
3.2 MARGINAL FISHER ANALYSIS (MFA)
Based on our novel framework of deep architecture, we introduce Marginal Fisher Analysis (MFA) to build MDA. Here, many traditional feature learning models, such as linear discriminant analysis
(LDA), can be used as building blocks of MDA. Take LDA as an example: it assumes that the data of each class follow a Gaussian distribution. However, this assumption is often not satisfied in the real world, and without it LDA cannot separate data with nonlinear structure well. Alternatively, MFA solves this problem effectively. Hence, considering its learning capability, we choose MFA as the building block of MDA in this work. MFA uses the graph embedding framework to set up an intrinsic graph that characterizes the intraclass compactness and a penalty graph that characterizes the interclass separability. The marginal Fisher criterion is defined as
$$\mathbf{W}^* = \arg\min_{\mathbf{W}} \frac{\mathrm{tr}\left(\mathbf{W}^T \mathbf{X} (\mathbf{D} - \mathbf{A}) \mathbf{X}^T \mathbf{W}\right)}{\mathrm{tr}\left(\mathbf{W}^T \mathbf{X} (\mathbf{D}^p - \mathbf{A}^p) \mathbf{X}^T \mathbf{W}\right)}, \quad (4)$$
where $\mathbf{A}$ and $\mathbf{A}^p$ are the affinity matrices of the intrinsic and penalty graphs, and $\mathbf{D}$ and $\mathbf{D}^p$ are diagonal matrices with elements $D_{ii} = \sum_j A_{ij}$ and $D^p_{ii} = \sum_j A^p_{ij}$, respectively. The final projection matrix is the product of the PCA projection and the marginal Fisher projection,
$$\mathbf{W}_{\mathrm{MFA}} = \mathbf{W}_{\mathrm{PCA}} \mathbf{W}^*. \quad (5)$$
3.3 MARGINAL DEEP ARCHITECTURES (MDA)
In order to combine the advantages of MFA and the proposed deep architectures, we propose the marginal deep architectures (or MDA). The MDA, inherited from the proposed framework of deep architectures, is shown in Fig. 1. Given an input vector $\mathbf{x} \in [0, 1]^d$, we first map it to a higher dimensional space by a random weight matrix $\mathbf{W}_{r1}$. The representation of the first hidden layer is computed as
$$\mathbf{a}^1 = s(\mathbf{W}_{r1}^T \mathbf{x} + \mathbf{b}), \quad (6)$$
where $s(\cdot)$ is the sigmoid function $s(x) = \frac{1}{1 + e^{-x}}$, $\mathbf{b}$ is the bias term, and $\mathbf{a}^1$ is the output of the first layer. From the second layer to the (n-1)-th layer, we use the weight matrices learned by MFA to map layer by layer:
$$\mathbf{a}^k = s(\mathbf{W}_{\mathrm{MFA}_{k-1}}^T \mathbf{a}^{k-1} + \mathbf{b}). \quad (7)$$
The last layer is a softmax regression layer, and its number of neurons equals the number of categories. The cost function is defined as
$$J(\mathbf{w}) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{K} I(y_i = j) \log \frac{\exp(\mathbf{w}_j^T \mathbf{a}_i^{n-1})}{\sum_{l=1}^{K} \exp(\mathbf{w}_l^T \mathbf{a}_i^{n-1})}, \quad (8)$$
where $I(\cdot)$ is the indicator function, $I(x) = 1$ if x is true and $I(x) = 0$ otherwise, and $y_i$ is the label corresponding to $\mathbf{x}_i$. The probability that $\mathbf{x}_i$ is classified to class j is
$$p(y_i = j \mid \mathbf{x}_i, \mathbf{w}) = \frac{\exp(\mathbf{w}_j^T \mathbf{a}_i^{n-1})}{\sum_{l=1}^{K} \exp(\mathbf{w}_l^T \mathbf{a}_i^{n-1})}. \quad (9)$$
Taking derivatives, one can show that the gradient is
$$\nabla_{\mathbf{w}_j} J(\mathbf{w}) = -\frac{1}{N} \sum_{i=1}^{N} \mathbf{a}_i^{n-1} \left( I(y_i = j) - p(y_i = j \mid \mathbf{x}_i, \mathbf{w}) \right). \quad (10)$$
If the (n−1)-th layer has more neurons than the last layer, we can continue to use MFA for this mapping. On the contrary, if the (n−1)-th layer has fewer neurons than the last layer, we randomly initialize the weight matrix between these two layers. Next, in order to improve MDA, we introduce the back propagation, denoising and dropout operations.
3.4 BACK PROPAGATION
In order to adjust the network, we use back propagation (Rumelhart et al., 1986) to compute the partial derivatives and stochastic gradient descent to update the weight matrices and bias terms. For each node i in the output layer (the n-th layer), we compute an error term as
$$\delta_i^n = \nabla J(\mathbf{w}), \quad (11)$$
where $J(\mathbf{w})$ is the cost function of Eq. (8) and $\nabla J(\mathbf{w})$ is computed from Eq. (10). For each node i from the (n-1)-th layer down to the second layer, the error term is computed as
$$\delta_i^k = \Big( \sum_{j} w_{ji}^k \, \delta_j^{k+1} \Big) \, s'(z_i^k), \quad (12)$$
where the sum runs over the nodes j of layer k+1 and $z_i^k$ is the pre-activation of node i in layer k. The back propagation procedure computes the gradient of the objective function with respect to the weights of the stacked modules, starting from the output at the top and ending at the input at the bottom.
3.5 DENOISING OPERATION
Vincent et al. proposed the denoising autoencoder to improve the robustness of the autoencoder (Vincent et al., 2008). It is very similar to regularization methods and helps avoid the overfitting problem. The basic idea is to corrupt part of the input data by a desired proportion ν of “destruction”: for each input $\mathbf{x}$, a fixed number νd of components are chosen at random and their values are forced to 0, while the others are left untouched. The initial input $\mathbf{x}$ is turned into a partially destroyed version $\tilde{\mathbf{x}}$ by means of a stochastic mapping,
$$\tilde{\mathbf{x}} \sim q_D(\tilde{\mathbf{x}} \mid \mathbf{x}), \quad (13)$$
where $q_D(\tilde{\mathbf{x}} \mid \mathbf{x})$ is the corruption distribution. Then, a hidden representation $\mathbf{h}$ is computed as
$$\mathbf{h} = s(\mathbf{W}^T \tilde{\mathbf{x}} + \mathbf{b}). \quad (14)$$
In our MDA, we use this idea to improve the network; please refer to Fig. 1 for a clear view. For the input layer, the output of the first hidden layer becomes
$$\mathbf{a}^1 = s(\mathbf{W}_{r1}^T \tilde{\mathbf{x}} + \mathbf{b}_1), \quad (15)$$
where $\mathbf{W}_{r1}$ is the random weight matrix of the first layer and $\mathbf{b}_1$ is its bias term. The denoising operation corresponds to an additional criterion: robustness to partial destruction of the input, meaning that a good intermediate representation should be learnable from a corrupted version of the observed input. This operation helps learn a more stable structure and avoids the overfitting problem in most cases.
3.6 DROPOUT
For the same reason as the denoising operation, dropout is a technique to prevent overfitting (Hinton et al., 2012b). When a large feedforward neural network is trained on a small training set, dropout improves performance on the test set. To prevent complex co-adaptations on the training data, the basic idea of dropout is that each hidden node is randomly omitted from the network with a probability β, so that a hidden node cannot rely on other hidden nodes being present. From another point of view, dropout is a very efficient way of performing model averaging with neural networks: it is equivalent to training many separate networks and applying each of them to the test data, while saving training time, since we implicitly average the predictions of a very large number of different networks. Fig. 1 shows the dropout operation in our MDA.
4 EXPERIMENTS
4.1 DATASET DESCRIPTIONS
We evaluate the performance of MDA on five benchmark data sets; details are shown in Tab. 1. The USPS 1 data set is a handwritten digit image data set that includes 7291 training samples and 2007 test samples from 10 classes with 256 dimensional features. The task is to recognize the digits 0 to 9. The Isolet 2 data set is a collection of audio feature vectors of spoken letters from the English alphabet. It includes 6238 training samples and 1559 test samples from 26 classes with 617 dimensional features. The task is to identify which letter is spoken based on the recorded
1http://www.gaussianprocess.org/gpml/data/ 2http://archive.ics.uci.edu/ml/datasets/ISOLET
(and pre-processed) audio signal. Sensor 3 is a sensorless drive diagnosis data set that includes 46816 training samples and 11693 test samples from 11 classes with 48 dimensional features. The features are extracted from electric current drive signals. The task is to classify 11 classes corresponding to different conditions of a drive with intact and defective components. Covertype 4 contains geological and map-based data from four wilderness areas located in the Roosevelt National Forest of northern Colorado. It includes 15120 training samples and 565892 test samples from 7 classes with 54 dimensional features. The task is to identify the forest cover type from cartographic variables. For the IbnSina 5 ancient Arabic document data set, we use 50 pages of the manuscript for training (17543 training samples) and 10 pages for testing (3125 test samples). The data samples belong to 174 classes of subwords and are of dimensionality 200.
In addition, we use the large-scale dataset CIFAR-10 6 to test our MDA on large-scale applications. The CIFAR-10 dataset consists of 60000 32 × 32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. We also test our MDA on a specific task using the CMU motion capture (CMU mocap) data set 7. The CMU mocap data set includes three categories, namely jumping, running and walking. We choose 49 video sequences from four subjects. For each sequence, the features are generated using Lawrence's method 8, with dimensionality 93 (Zhong et al., 2010). Because the CMU data set has very few samples, we adopt 10-fold cross-validation in our experiments and use the average error rate and standard deviation to evaluate the performance.
4.2 CLASSIFICATION ON FIVE BENCHMARK DATA SETS
4.2.1 BASELINE METHODS
In order to evaluate the performance of MDA, we compared our MDA with 5 deep learning models, including the autoencoder (AE) (Hinton & Salakhutdinov, 2006), stacked autoencoders, denoising autoencoders (Vincent et al., 2008), stacked denoising autoencoders (Vincent et al., 2010) and stacked denoising autoencoders with dropout; 2 feature learning models, MFA (Zhong et al., 2013; Yan et al., 2007) and PCA (Jolliffe, 2002); the PCA deep architecture (PDA) based on our uniform framework; and the classification accuracy in the original feature space.
4.2.2 EXPERIMENTAL SETTINGS
All of the deep learning methods used the same settings. The size of the minibatch was set to 100, the learning rate and momentum were set to the default values of 1 and 0.5, the number of epochs was set to 400, and the dropout rate and denoising rate ν were set to 0.1. For the AE and SAE, the weight penalty of the L2 norm was set to 10−4. For MFA, the number of nearest neighbors for constructing the intrinsic graph was set to 5, while that for constructing the penalty graph was set to 20. The target spaces of MFA and PCA on the different data sets are shown in Tab 1. For the USPS data set, the architecture was set to 256 − 512 − 256 − 128 − 64 − 32. For the Isolet data set, the architecture was set to 617 − 1324 − 617 − 308. For the Sensor data set, the architecture was set to 48 − 96 − 48 − 24.
3http://archive.ics.uci.edu/ml/datasets/Dataset+for+Sensorless+Drive+Diagnosis# 4http://archive.ics.uci.edu/ml/datasets/Covertype 5http://www.causality.inf.ethz.ch/al data/IBN SINA.html 6http://www.cs.toronto.edu/ kriz/cifar.html 7http://http://mocap.cs.cmu.edu/ 8http://is6.cs.man.ac.uk/∼neill/mocap/
For the Covertype data set, we set the architecture to 54− 216− 108− 54− 27. Finally, for Ibnsina data set, the architecture was set to 200− 400− 200− 100.
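For concreteness, a hypothetical helper for instantiating such a fully connected stack from an architecture spec might look as follows (a PyTorch sketch of ours, not the authors' code; the sigmoid activation matches s(·) above, and all names are illustrative):

```python
import torch.nn as nn

def build_encoder(dims):
    """Instantiate a fully connected stack from an architecture spec,
    e.g. [54, 216, 108, 54, 27] for the Covertype data set."""
    layers = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(d_in, d_out), nn.Sigmoid()]
    return nn.Sequential(*layers)

covertype_encoder = build_encoder([54, 216, 108, 54, 27])
ibnsina_encoder = build_encoder([200, 400, 200, 100])
```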
4.2.3 CLASSIFICATION RESULTS
The experimental results are shown in Tab. 2. We can see that our MDA achieves the best results on four of the five data sets; on the Sensor data set, MDA achieves the second-best result, behind only PDA. PDA achieves the best result on the Sensor data set and the second-best results on the other data sets. These results demonstrate that our uniform deep architectures achieve good performance in most cases. In addition, MDA outperforms not only the traditional deep learning models but also the shallow feature learning models. This shows that our deep architectures, built by stacking feature learning models, can learn better features than the shallow feature learning models alone.
4.3 EVALUATION
4.3.1 DIFFERENT STRUCTURES FOR MDA
In order to evaluate the desired structure of MDA, we varied the number of nodes in the second layer. For the USPS data set, we first removed the second layer, giving the architecture 256−128−64−32. Then, we set the number of nodes in the second layer to twice that of the input layer, giving the architecture 256 − 512 − 256 − 128 − 64 − 32. Next, with four times as many nodes as the input layer, the architecture was 256− 1024− 512− 256− 128− 64− 32. Finally, with eight times as many nodes as the input layer, the architecture was 256 − 2048 − 1024 − 512 − 256 − 128 − 64 − 32. The structures for the other data sets are shown in Tab. 3.
The experimental results are shown in Tab. 4. When the number of nodes in the second layer is twice that of the input layer, MDA achieves the minimum classification error on all data sets except the Covertype data set. When the number of nodes in the second layer is four times that of the input layer, MDA gets the worst result on the Covertype data set. We can conclude that MDA works well when the number of nodes in the second layer is two or four times that of the input layer.
4.3.2 DIFFERENT NUMBER OF HIDDEN LAYERS FOR MDA
In order to evaluate how many hidden layers suit different datasets, we designed experiments with different numbers of hidden layers. We used 1 ∼ 7 hidden layers on the USPS and Isolet datasets and 1 ∼ 5 hidden layers on the Covertype, Sensor and Ibnsina datasets. The experimental settings were the same as in the previous experiments.
Tab. 5 shows the classification error on the 5 datasets with different numbers of hidden layers. All datasets achieved their best results with 3 hidden layers, except the USPS dataset, which achieved its best result with 5 hidden layers. From 1 to 3 hidden layers, the classification error decreases on all datasets as the number of layers increases. For small- and middle-scale applications, very deep architectures are not needed; for large-scale applications, deeper architectures can be designed to achieve better performance.
4.4 CLASSIFICATION ON LARGE SCALE DATASET CIFAR-10
The previous sections demonstrated the advantages of MDA on small- and middle-scale applications. In order to evaluate the universality of MDA, we chose a relatively large-scale dataset, CIFAR-10, to test the performance of MDA.
In our experiments, we first converted the colour images to grayscale in order to reduce the dimensionality of the input. Each sample was then treated as a 1024-dimensional vector, which serves as the input to our MDA; we refer to this data set as gray-CIFAR10. The architecture was set to 1024−2048−1024−512−256−128−64, the minibatch size was set to 100, the dropout ratio and denoising ratio were set to 0.1, the number of epochs was set to 400, the learning rate was set to 1, and the momentum was set to 0.5. We compared our MDA with the previous 6 methods.
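The paper does not specify the grayscale conversion; a plausible sketch using the standard BT.601 luma weights (our assumption) is:

```python
import numpy as np

def to_gray_vector(img_rgb):
    """Map a 32x32x3 uint8 CIFAR-10 image to a flat 1024-dim grayscale
    vector in [0, 1], using BT.601 luma weights."""
    img = img_rgb.astype(np.float32) / 255.0
    gray = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    return gray.reshape(-1)  # shape (1024,)
```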
Tab. 6(a) shows the classification error on gray-CIFAR10. We can see that PDA and MDA achieved the best results among these 7 methods. However, none of the methods performed well in absolute terms on this task, because the grayscale conversion discards colour information.
4.5 CLASSIFICATION ON CMU MOCAP DATA SET
The CMU mocap data set is a very small dataset of only 49 samples. Traditional deep learning methods do not work well in this kind of application. We tested our MDA and PDA and compared them with the other 5 deep learning models. The architectures for all deep models (except PDA) were set to 93− 186− 93− 47− 24. Since the CMU mocap data set only has 49 samples, PCA can reduce the dimensionality to at most 49, so the architecture of PDA was set to 93 − 186 − 24. The denoising ratio and dropout ratio were set to 0.1 for DAE, DAE with dropout, SDAE, SAE, PDA and MDA. The weight penalty on AE was set to 10−4. The learning rate was set to 0.01, the momentum was set to 0.5, and the number of epochs was set to 600. The experiment was evaluated with 10-fold cross-validation. The experimental results are shown in Tab. 6(b).
In Tab. 6(b), our PDA and MDA achieve the best results on this dataset and have a lower standard deviation than the other deep learning models, demonstrating that PDA and MDA are more stable. The traditional autoencoder, SDAE and DAE with dropout achieved the same result on this dataset, better than SAE and DAE.
5 CONCLUSION
In this paper, we proposed a novel deep learning framework based on stacking feature learning models to handle small- or middle-scale data sets. We then instantiated this framework with MFA, yielding MDA. Deep learning techniques such as backpropagation, denoising and dropout are applied to MDA to improve its performance. Extensive experiments on 7 data sets of different types demonstrate that MDA outperforms not only shallow feature learning models but also state-of-the-art deep learning models on small- and middle-scale applications. Our evaluation of MDA also shows how to adjust its parameters so that it works well. For future work, we plan to try other feature learning models and to explore different structures for this novel deep learning model. In addition, we plan to explore new deep architectures based on this framework to handle large-scale datasets. | 1. What is the main contribution of the paper regarding deep neural networks?
2. What are the strengths and weaknesses of the proposed method?
3. How does the reviewer assess the novelty and significance of the paper's content?
4. What kind of baselines or comparisons would the reviewer suggest to enhance the paper's validity?
5. Are there any concerns regarding the computational cost of the proposed method? | Review | Review
This paper proposes to initialize the weights of a deep neural network layer-wise with a marginal Fisher analysis model, potentially making use of the similarity metric.
Pros:
There are a lot of experiments, albeit on small datasets, on which the authors tested their proposed method.
Cons:
lacking baselines such as a discriminatively trained convolutional network on a standard dataset such as CIFAR-10.
It is also unclear how computationally costly it is to compute the association matrix A in equation 4.
This is an OK paper, where a new idea is proposed, and combined with other existing ideas such as greedy-layerwise stacking, dropout, and denoising auto-encoders.
However, there have been many papers with similar ideas perhaps 3-5 years ago, e.g. SPCANet.
Therefore, the main novelty is the use of marginal Fisher Analysis as a new layer. This would be ok, but the baselines needed to demonstrate that this approach works better are missing. In particular, I'd like to see how a conv net or fully connected net trained from scratch with good initialization would do on these problems.
To improve the paper, the authors should try to demonstrate without doubt that initializing layers with MFA is better than just random weight matrices. |
ICLR | Title
Private GANs, Revisited
Abstract
We show that with improved training, the standard approach for differentially private GANs – updating the discriminator with noisy gradients – achieves or competes with state-of-the-art results for private image synthesis. Existing instantiations of this approach neglect to consider how adding noise only to discriminator updates disrupts the careful balance between generator and discriminator necessary for successful GAN training. We show that a simple fix – taking more discriminator steps between generator steps – restores parity and improves training. Furthermore, with the goal of restoring parity between the generator and discriminator, we experiment with further modifications to improve discriminator training and see further improvements in generation quality. For MNIST at ε = 10, our private GANs improve the record FID from 48.4 to 13.0, and record downstream classifier accuracy from 83.2% to 95.0%.
1 INTRODUCTION
Differential privacy (DP) (Dwork et al., 2006b) has emerged as a compelling approach for training machine learning models on sensitive data. However, incorporating DP requires significant changes to the training process. Notably, it prevents the modeller from working directly with private data, complicating debugging and exploration. Furthermore, the modeller can no longer interact with a private dataset after exhausting their allocated privacy budget. One approach to alleviate these issues is by producing differentially private synthetic data, which can be plugged directly into existing machine learning pipelines, without further concern for privacy.
A recent line of work studies leveraging deep generative models to produce DP synthetic data. Early efforts focused on privatizing generative adversarial networks (GANs) (Goodfellow et al., 2014) by using differentially private stochastic gradient descent (DPSGD) (Abadi et al., 2016) to update the GAN discriminator – an approach referred to as DPGAN (Xie et al., 2018; Beaulieu-Jones et al., 2019; Torkzadehmahani et al., 2019).
However, follow-up work has significantly departed from this baseline DPGAN approach, either in terms of: (a) the privatization scheme, in favor of approaches based on subsample-and-aggregate which divide the data into ≥ 1000 disjoint partitions and train teacher discriminators separately on each one (Jordon et al., 2019; Long et al., 2021; Chen et al., 2020; Wang et al., 2021); or (b) the generative modelling framework altogether, opting instead to minimize notions of statistical distance between real and generated data, such as maximum mean discrepancy (Harder et al., 2021; Vinaroz et al., 2022), or Sinkhorn divergences (Cao et al., 2021).
For labelled image synthesis, these custom generative models designed specifically for privacy fall short of GANs when evaluated at their non-private limits (ε → ∞), suggesting limited scalability to larger, higher-resolution datasets.1 On the other hand, the literature corroborates that under modest privacy budgets, these departures from the baseline DPGAN lead to significant improvements in generation quality. Proposed explanations attribute these results to inherent limitations of the DPGAN framework, suggesting that either: (a) privatizing discriminator training is sufficient for privacy, but may be overkill when only the generator needs to be released (Long et al., 2021); or (b) adversarial objectives may be unsuited for training under privacy (Cao et al., 2021).
1For example, the record FID for MNIST at ε = 10 is 48.4 (Cao et al., 2021). When evaluated at ε = ∞, their method achieves an FID of 43.4. Our non-private GANs obtain an FID of 3.2.
Our contributions. We demonstrate that the reported poor results of DPGANs should not be attributed to inherent limitations of the framework, but rather, training issues. Specifically, we propose that the asymmetric noise addition in DPGANs (adding noise to discriminator updates only) weakens the discriminator relative to the generator, disrupting the careful balance necessary for successful GAN training. We propose that taking more discriminator steps between generator updates addresses the imbalance introduced by noise. With this change, DPGANs improve significantly (see Figure 1), going from non-competitive to achieving or competing with state-of-the-art results in private image synthesis.
Furthermore, we show this perspective on private GAN training (“restoring parity to a discriminator weakened by DP noise”) can be applied to improve training. We make other modifications to discriminator training – large batch sizes and adaptive discriminator step frequency – to further improve upon the aforementioned results.
In summary, we make the following contributions:
1. We find that taking more discriminator steps between generator steps significantly improves DPGANs. Contrary to the previous results in the literature, DPGANs do compete with state-of-the-art generative modelling approaches designed with privacy in mind.
2. We present empirical findings towards understanding why more frequent discriminator steps help. We propose an explanation based on asymmetric noise addition for why vanilla DPGANs do not perform well, and why taking more steps helps.
3. We put our explanation to the test. We employ it as a principle for designing better private GAN training recipes, and indeed are able to improve over the aforementioned results.
2 PRELIMINARIES
Our goal is to train a generative model on sensitive data that is safe to release, i.e., it does not leak the secrets of individuals in the training dataset. We do this by ensuring the training algorithm A – which takes as input the sensitive dataset D ∈ U and returns the parameters of a trained (generative) model θ ∈ Θ – satisfies differential privacy. Definition 1 (Differential Privacy (Dwork et al., 2006b)). A randomized algorithm A : U → Θ is (ε, δ)-differentially private if for every pair of neighbouring datasets D,D′ ∈ U , we have
P{A(D) ∈ S} ≤ exp(ε) · P{A(D′) ∈ S}+ δ for all S ⊆ Θ.
In this work, we adopt the add/remove definition of DP, and say two datasets D and D′ are neighbouring if they differ in at most one entry, that is, D = D′ ∪ {x} or D′ = D ∪ {x}.
Algorithm 1 TrainDPGAN(D; ·)
1: Input: Labelled dataset D = {(x_j, y_j)}_{j=1}^n. Discriminator D and generator G initializations ϕ_0 and θ_0. Optimizers OptD, OptG. Privacy parameter δ. Hyperparameters: nD (D steps per G step), T (total number of D steps), B (expected batch size), C (clipping norm), σ (noise multiplier).
2: q ← B/|D| and t, k ← 0 ▷ Calculate sampling rate q, initialize counters.
3: while t < T do ▷ Update D with DPSGD.
4: S_t ∼ PoissonSample(D, q) ▷ Sample a real batch S_t by including each (x, y) ∈ D w.p. q.
5: S̃_t ∼ G(·; θ_k)^B ▷ Sample fake batch S̃_t.
6: g_{ϕ_t} ← Σ_{(x,y)∈S_t} clip(∇_{ϕ_t}(− log(D(x, y; ϕ_t))); C) + Σ_{(x̃,ỹ)∈S̃_t} clip(∇_{ϕ_t}(− log(1 − D(x̃, ỹ; ϕ_t))); C) ▷ Clip per-example gradients.
7: ĝ_{ϕ_t} ← (1/(2B))(g_{ϕ_t} + z_t), where z_t ∼ N(0, C^2 σ^2 I) ▷ Add Gaussian noise.
8: ϕ_{t+1} ← OptD(ϕ_t, ĝ_{ϕ_t}) and t ← t + 1
9: if nD divides t then ▷ Perform G update every nD steps.
10: S̃′_t ∼ G(·; θ_k)^B
11: g_{θ_k} ← (1/B) Σ_{(x̃,ỹ)∈S̃′_t} ∇_{θ_k}(− log(D(x̃, ỹ; ϕ_t)))
12: θ_{k+1} ← OptG(θ_k, g_{θ_k}) and k ← k + 1
13: end if
14: end while
15: ε ← PrivacyAccountant(T, σ, q, δ) ▷ Compute privacy budget spent.
16: Output: Final G parameters θ_k. (ε, δ)-DP guarantee.
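For readers who prefer code, below is a minimal PyTorch sketch (ours, not the authors' implementation) of lines 4–8 of Algorithm 1 — one noisy discriminator update — computing per-example gradients with microbatches of size 1 for clarity; production implementations such as Opacus vectorize this step. `fake`/`fake_y` are assumed to be detached samples from G.

```python
import torch

def dp_discriminator_step(D, opt_D, real, real_y, fake, fake_y, C, sigma, B):
    """One noisy discriminator update: clip each per-example gradient to
    norm C, sum, add N(0, C^2 sigma^2 I) noise, and scale by 1/(2B)."""
    params = [p for p in D.parameters() if p.requires_grad]
    grad_sum = [torch.zeros_like(p) for p in params]
    batch = [(x, y, True) for x, y in zip(real, real_y)] + \
            [(x, y, False) for x, y in zip(fake, fake_y)]
    for x, y, is_real in batch:
        out = D(x.unsqueeze(0), y.unsqueeze(0)).squeeze()
        loss = -torch.log(out) if is_real else -torch.log(1 - out)
        grads = torch.autograd.grad(loss, params)          # per-example gradient
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(C / (norm + 1e-12), max=1.0)   # clip(. ; C)
        for gs, g in zip(grad_sum, grads):
            gs.add_(scale * g)
    opt_D.zero_grad()
    for p, gs in zip(params, grad_sum):
        noise = torch.randn_like(gs) * C * sigma           # z_t ~ N(0, C^2 sigma^2 I)
        p.grad = (gs + noise) / (2 * B)
    opt_D.step()
```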
We highlight one convenient property of DP, known as closure under post-processing. This says that interacting with a privatized model (e.g., using it to compute gradients on non-sensitive data, generate samples) does not lead to any further privacy violation.
Proposition 2 (Post-processing). Let A : U → Θ be a randomized algorithm that is (ε, δ)-DP, and f : Θ→ Y be an arbitrarily randomized mapping. Then f ◦ A : U → Y is (ε, δ)-DP.
DPSGD. A gradient-based training algorithm can be privatized by employing differentially private stochastic gradient descent (DPSGD) (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016) as a drop-in replacement for SGD. DPSGD involves clipping per-example gradients and adding Gaussian noise to their sum, which effectively bounds and masks the contribution of any individual point to the final model parameters. Privacy analysis of DPSGD follows from several classic tools in the DP toolbox: Gaussian mechanism, privacy amplification by subsampling, and composition (Dwork et al., 2006a; Dwork & Roth, 2014; Abadi et al., 2016; Wang et al., 2019). Our work employs the DPSGD analysis of Mironov et al. (2019) implemented in Opacus (Yousefpour et al., 2021).
DPGANs. Algorithm 1 details the training algorithm for DPGANs, which is effectively an instantiation of DPSGD. Note that only gradients for the discriminator D must be privatized (via clipping and noise), and not those for the generator G. This is a consequence of post-processing (Proposition 2) – the generator only interacts with the sensitive dataset indirectly via discriminator parameters, and therefore does not need further privatization.
3 FREQUENT DISCRIMINATOR STEPS IMPROVES PRIVATE GANS
In this section, we discuss our main finding: the number of discriminator steps taken between each generator step (nD from Algorithm 1) plays a significant role in the success of private GAN training. For a fixed setting of DPSGD hyperparameters, there is an optimal range of values for nD that maximizes generation quality, in terms of both visual quality and utility for downstream classifier training. This value is often quite large (nD ≈ 100 in some cases).
3.1 EXPERIMENTAL DETAILS
Setup. We focus on labelled generation of MNIST (LeCun et al., 1998) and FashionMNIST (Xiao et al., 2017), both of which are comprised of 60000 28×28 grayscale images divided into 10 classes. To build a strong baseline, we begin from an open source PyTorch (Paszke et al., 2019) implementation2 of DCGAN (Radford et al., 2016) that performs well non-privately, and copy their training recipe. We then adapt their architecture to our purposes: removing BatchNorm layers (which are not compatible with DPSGD) and adding label embedding layers to enable labelled generation. Training this configuration non-privately yields labelled generation that achieves FID scores of 3.2 on MNIST and 15.9 on FashionMNIST. Finally, we note that these models are not small: D and G have 1.72M and 2.27M trainable parameters respectively. Please see Appendix B.1 for more details.
Privacy implementation. To privatize training, we use Opacus (Yousefpour et al., 2021) which implements per-example gradient computation and the RDP accounting of Mironov et al. (2019). For our baseline setting, we use the following DPSGD hyperparameters: we keep the non-private (expected) batch size B = 128, and use a noise scale σ = 1 and clipping norm C = 1. Under these settings, we have the budget for T = 450000 discriminator steps when targeting (10, 10−5)-DP.
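As a rough sanity check of this budget, the accounting can be replayed with Opacus' RDP accountant; the sketch below uses the Opacus 1.x API, which may differ in other versions.

```python
from opacus.accountants import RDPAccountant

q = 128 / 60000           # sampling rate B / |D|
accountant = RDPAccountant()
for _ in range(450_000):  # T discriminator steps; identical steps are merged internally
    accountant.step(noise_multiplier=1.0, sample_rate=q)
print(accountant.get_epsilon(delta=1e-5))  # expected to be close to eps = 10
```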
Evaluation. We evaluate our generative models by examining the visual quality and utility for downstream tasks of generated images. Following prior work, we measure visual quality by computing the Fréchet Inception Distance (FID) (Heusel et al., 2017) between 60000 generated images and the entire test set.3 To measure downstream task utility, we again follow prior work, and train a CNN classifier on 60000 generated image-label pairs and report its accuracy on the real test set.
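For reference, a hedged sketch of the FID computation using the referenced pytorch-fid package follows (function name per its current release; treat the exact signature as an assumption for your version).

```python
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

# Two directories of image files: 60000 generated samples vs. the test set.
fid = calculate_fid_given_paths(
    ["samples/generated", "samples/test"],
    batch_size=50,
    device="cuda" if torch.cuda.is_available() else "cpu",
    dims=2048,  # InceptionV3 pool3 features, the standard FID setting
)
print(f"FID: {fid:.1f}")
```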
3.2 RESULTS
More frequent discriminator steps improve generation. We plot in Figures 1a and 2 the evolution of FID and downstream accuracy during DPGAN training for both MNIST and FashionMNIST, under varying discriminator update frequencies nD. This parameter has an outsized impact on the final results. For MNIST, nD = 50 yields the best results; on FashionMNIST, the best FID is obtained at nD = 200 and the best accuracy at nD = 100.
Private GANs are on a path to mode collapse. For the MNIST results in Figures 1a and 2a, we observe that at low discriminator update frequencies (nD = 10), the best FID and accuracy scores occur early in training, well before the privacy budget we are targeting is exhausted.4 In fact, at 50000 discriminator steps (ε ≈ 2.85), nD = 10 has better FID (30.6) and accuracy (83.3%) than other settings of nD. However, these results deteriorate with continued training. In Figure 3, we
2Courtesy of Hyeonwoo Kang (https://github.com/znxlwm). Code available at this link. 3We use an open source PyTorch implementation to compute FID: https://github.com/ mseitzer/pytorch-fid. 4This observation has been reported in (Neunhoeffer et al., 2021), serving as motivation for their remedy of taking a mixture of intermediate models encountered in training. We are not aware of any mentions of this aspect of DPGAN training in papers reporting DPGAN baselines for labelled image synthesis.
plot the evolution of generated images for this nD = 10 run over the course of training, and observe qualitative evidence of mode collapse, co-occurring with the deterioration in FID and accuracy.
An optimal discriminator update frequency. These results suggest that, fixing other DPSGD hyperparameters, there is an optimal setting for the discriminator step frequency nD that strikes a balance between: (1) being too low, causing the generation quality to peak early in training and then undergo mode collapse, so that all subsequent training consumes additional privacy budget without improving the model; and (2) being too high, preventing the generator from taking enough steps to converge before the privacy budget is exhausted (an example of this is the nD = 200 run in Figure 2a). Striking this balance results in the most effective utilization of the privacy budget towards improving the generator.
4 WHY DOES TAKING MORE STEPS HELP?
In this section, we present empirical findings towards understanding why more frequent discriminator steps improve DPGAN training. We propose an explanation that is consistent with our findings.
How does DP affect GAN training? Figure 4 compares the accuracy of the GAN discriminator (on held-out real and fake examples) immediately before each generator step between non-private training and private training with different settings of nD. We observe that non-privately, discriminator accuracy stays around 60% throughout training. Naively introducing DP (nD = 1) leads to a qualitative difference: DP causes discriminator accuracy to drop to 50% immediately at the start of training, and never recovers.5
For other settings of nD, we make three observations: (1) larger nD corresponds to higher accuracy; (2) the generator improves during the periods in which the discriminator stays above 50% accuracy; and (3) accuracy decreases throughout training as the generator improves, and degradation/stagnation of the generator (as observed in Figure 3) co-occurs with discriminator accuracy dropping to 50%.
Based on these observations, we propose the following explanation for why more steps help:
• Generator improvement occurs when the discriminator is capable of distinguishing between real and fake data.
• The asymmetric noise addition introduced by DP to the discriminator makes such a task difficult, resulting in limited generator improvement.
• Allowing the discriminator to train longer on a fixed generator improves its accuracy, recovering the non-private case where the generator and discriminator are balanced.
Does reducing noise accomplish the same thing? In light of this explanation, we ask if reducing the noise level σ can offer the same improvement as taking more steps, as reducing σ should also improve discriminator accuracy before a generator step. To test this: starting from our setting in Section 3, fixing nD = 1, and targeting MNIST at ε = 10, we search over a grid of noise levels
5Our plot only shows the first 20000 generator steps, but we remark that this persists until the end of training (450000 steps).
σ = {0.4, 0.43, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8}; the lowest of which, σ = 0.4, admits a budget of only T = 360 discriminator steps. We obtain a best FID of 127.1 and best accuracy of 57.5% at noise level σ = 0.45. Hence we can conclude that in this setting, incorporating discriminator update frequency in our design space allows for more effective use of privacy budget for improving the discriminator, and in turn, generation quality.
Does taking more discriminator steps always help? As we discuss in more detail in Section 5.1, when we are able to find other means to improve the discriminator beyond taking more steps, tuning discriminator update frequency may not yield improvements. To illustrate with an extreme case, consider eliminating the privacy constraint. In non-private GAN training, taking more steps is known to be unnecessary. We corroborate this result: we run our non-private baseline from Section 3 with the same number of generator steps, but opt to take 10 discriminator steps between each generator step instead of 1. FID worsens from 3.2→ 8.3, and accuracy worsens from 96.8%→ 91.3%.
5 BETTER GENERATORS VIA BETTER DISCRIMINATORS
Our proposed explanation in Section 4 provides a concrete suggestion for improving GAN training: effectively use our privacy budget to maximize the number of generator steps taken when the discriminator has sufficiently high accuracy. We experiment with modifications to the private GAN training recipe towards these ends, which translate to improved generation.
5.1 LARGER BATCH SIZES
Several recent works have demonstrated that for classification tasks, DPSGD achieves higher accuracy with larger batch sizes, after tuning the noise scale σ accordingly (Tramèr & Boneh, 2021; Anil et al., 2021; De et al., 2022). GAN training is typically conducted with small batch sizes (for example, DCGAN uses B = 128, which we adopt; StyleGAN uses B = 32). Therefore it is interesting to see if large batch sizes indeed improve private GAN training. We corroborate that larger batch sizes do not significantly improve our non-private MNIST baseline from Section 3: when we go up to B = 2048 from B = 128, FID stays at 3.2 and accuracy improves from 96.8%→ 97.5%.
Results. We scale up batch sizes, considering B ∈ {64, 128, 512, 2048}, and search for the optimal noise scale σ and nD (details in Appendix B.2). We target both ε = 1 and ε = 10. We report the best results from our hyperparameter search in Table 1. We find that larger batch sizes lead to improvements: for ε = 10, the best MNIST and FashionMNIST results are achieved at B = 2048. For ε = 1, the best results are achieved at B = 512. We also note that for large batch sizes, the optimal number of generator steps can be quite small. For B = 2048, σ = 4.0, targeting MNIST at ε = 10, nD = 5 is the optimal discriminator update frequency, and improves over our best B = 128 setting employing nD = 50.
5.2 ADAPTIVE DISCRIMINATOR STEP FREQUENCY
Our observations from Section 3 and 4 motivate us to consider adaptive discriminator step frequencies. As pictured in Figure 4, discriminator accuracy drops during training as the generator improves. In this scenario, we want to take more steps to improve the discriminator, in order to further improve the generator. However, using a large discriminator update frequency right from the beginning of training is wasteful – as evidenced by the fact that low nD achieves the best FID and accuracy early in training. Hence we propose to start at a low discriminator update frequency (nD = 1), and ramp up when our discriminator is performing poorly.
Accuracy on real data must be released with DP. While this is feasible, it introduces the additional problem of having to find the right split of privacy budget for the best performance. We observe that overall discriminator accuracy correlates with discriminator accuracy on fake samples alone (which is free to evaluate, by post-processing). Hence we use the latter as a proxy to assess discriminator performance.
The adaptive step frequency is parameterized by two terms, β and d. β is the decay parameter used to compute the exponential moving average (EMA) of discriminator accuracy on fake batches before each generator update. We use β = 0.99 in all settings. d is the accuracy floor that upon reaching, we move to the next update frequency nD ∈ {1, 2, 5, 10, 20, 50, 100, 200, 500}. We try d = 0.6 and d = 0.7, finding that 0.7 works better for large batches. Additionally, we promise a grace period of 2/(1 − β) = 200 generator steps before moving on to the next update frequency. This formula is motivated by the fact that β-EMA’s value is primarily determined by its last 2/(1−β) observations. The additional benefit of the adaptive step frequency is that it means we do not have to search for the optimal update frequency. Although the adaptive step frequency introduces the extra hyperparameter of the threshold d, we found that these two settings (d = 0.6 and d = 0.7) were sufficient to improve over results of a much more extensive hyperparameter search.
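A sketch of this adaptive schedule as we read it follows (ours, not the authors' code; initialization details such as the starting EMA value are assumptions).

```python
FREQS = [1, 2, 5, 10, 20, 50, 100, 200, 500]

class AdaptiveStepFrequency:
    """EMA of D's accuracy on fake batches; when it falls below the floor d,
    promote to the next discriminator step frequency, then hold for a grace
    period of 2/(1 - beta) generator steps."""

    def __init__(self, beta=0.99, d=0.7):
        self.beta, self.d = beta, d
        self.ema = 1.0                    # starting value: our assumption
        self.idx = 0                      # start at n_D = 1
        self.grace = int(2 / (1 - beta))  # 200 generator steps

    def update(self, fake_acc):
        """Call once per generator step with D's accuracy on the fake batch."""
        self.ema = self.beta * self.ema + (1 - self.beta) * fake_acc
        if self.grace > 0:
            self.grace -= 1
        elif self.ema < self.d and self.idx + 1 < len(FREQS):
            self.idx += 1
            self.grace = int(2 / (1 - self.beta))
        return FREQS[self.idx]            # current n_D
```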
5.3 COMPARISON WITH PREVIOUS RESULTS IN THE LITERATURE
5.3.1 MNIST AND FASHIONMNIST
Table 1 summarizes our best experimental settings for MNIST and FashionMNIST, and situates them in the context of previously reported results for the task. We provide some example generated images in Figures 7 and 8 for ε = 10, and Figures 9 and 10 for ε = 1.
Simple DPSGD beats all alternative GAN privatization schemes. Our baseline DPGAN from Section 3, with the appropriate choice of nD (and without the modifications described in this section yet), outperforms all other GAN-based approaches proposed in the literature (GS-WGAN, PATEGAN, G-PATE, and DataLens) uniformly across both metrics, both datasets, and both privacy levels.
Large batch sizes and adaptive step schedules improve GAN training. Broadly speaking, across both privacy levels and both datasets, we see an improvement from taking larger batch sizes, and then another with an adaptive step schedule. The magnitude of improvement varies.
Comparison with state-of-the-art. In the low privacy/high ε regime, most of our results are dramatically better than prior work6 – for example, decreasing FID from 48.4 to 13.0 and increasing accuracy from 83.2% to 95.0% on MNIST. In the high privacy/low ε regime, improvements are not quite as extreme, but can still be significant (FID for MNIST and FashionMNIST), and only compare negatively to state-of-the-art for accuracy on FashionMNIST. A visual comparison of the ε = 10 results is provided in Figure 5.
5.3.2 CELEBA-GENDER
We also report results on generating 32 × 32 CelebA, conditioned on gender at (10, 10−6)-DP. For these experiments, we used slightly larger models (2.64M and 3.16M parameters for D and G
6We do not compare with two recent works on private generative models (Chen et al., 2022; Jiang et al., 2022), as we believe there are gaps in their privacy analyses. This has been confirmed by the authors of Jiang et al. (2022), and the sketch of an argument regarding non-privacy of Chen et al. (2022) has been shared with us by others (Anonymous, 2022).
respectively), and employed large batches (B = 1024) and adaptive discriminator step frequency with threshold d = 0.6. Results are summarized in Table 2, example images are in Figure 11.
6 DISCUSSION AND RELATED WORK
DP generative models. The baseline DPGAN that employs a DPSGD-trained discriminator was introduced by Xie et al. (2018), and was subsequently studied in several works (Torkzadehmahani et al., 2019; Beaulieu-Jones et al., 2019). Despite significant interest in the approach and numerous applications to various problems (≈ 300 citations as of November 2022), we were unable to find studies that explore the modifications we perform or uncover similar principles for improving training. Perhaps as a consequence, subsequent work has departed from this approach, examining alternative privatization schemes for GANs (Jordon et al., 2019; Long et al., 2021; Chen et al., 2020;
7We group per-class unconditional GANs together with conditional GANs under the DPGAN umbrella. 8These results are presented graphically in the paper. Exact numbers can be found in their code.
Wang et al., 2021). Contrary to their claims, our work shows that these privatization schemes do not outperform DPSGD. Other generative modelling frameworks have been applied to DP synthetic data including VAEs (Chen et al., 2018), maximum mean discrepancy (Harder et al., 2021; Vinaroz et al., 2022), Sinkhorn divergences (Cao et al., 2021), and normalizing flows (Waites & Cummings, 2021). We show that a well-tuned DPGAN competes with or outperforms these approaches.

Custom approaches versus a well-tuned DPSGD. An ongoing debate pertains to the best techniques and architectures for private ML. Roughly speaking, there are two schools of thought. One investigates novel architectures for privacy, which may be outperformed by more traditional approaches in the non-private setting. Some examples include Chen et al. (2018); Cao et al. (2021); Vinaroz et al. (2022), a variety of generative models specifically designed to be compatible with differential privacy. The other focuses on searching within the space of tried-and-tested methods that are understood to work well non-privately. Some examples include the works of De et al. (2022); Li et al. (2022), who demonstrate that, similar to the non-private setting, large-scale CNN and Transformer architectures can achieve state-of-the-art results for image classification and NLP tasks. The primary modifications to the pipeline are along the lines of changing the batch size, modifying the type of normalization layers, etc., most of which would be explored in a proper hyperparameter search in the non-private setting. Our work fits into the latter line: we show that novel generative models introduced for privacy can be outperformed by GANs trained with well-tuned DPSGD.
Tabular data. Our investigation focused on image datasets, while many important applications of private data generation involve tabular data. While Tao et al. (2021) find that private GAN-based approaches fail to preserve even basic statistics in these settings, we believe that our techniques may yield similar improvements.
7 CONCLUSION
Our most important contribution is to show that private GANs have been underrated by the research community, and can achieve state-of-the-art results with careful tuning. We hope and anticipate this will inspire the community to revisit private GANs, and quickly improve upon our results.
A GENERATED SAMPLES
We provide a few non-cherrypicked samples for MNIST and FashionMNIST at ε = 10 and ε = 1, as well as 32× 32 CelebA-Gender at ε = 10.
B IMPLEMENTATION DETAILS
B.1 MNIST AND FASHIONMNIST TRAINING RECIPE
For MNIST and FashionMNIST, we begin from an open source PyTorch implementation of DCGAN (Radford et al., 2016) (available at this link) that performs well non-privately, and copy their
training recipe. This includes: batch size B = 128, the Adam optimizer (Kingma & Ba, 2015) with parameters (α = 0.0002, β1 = 0.5, β2 = 0.999) for both G and D, the non-saturating GAN loss (Goodfellow et al., 2014), and a 5-layer fully convolutional architecture with width parameter d = 128.
To adapt it to our purposes, we make three architectural modifications: in both G and D we (1) remove all BatchNorm layers (which are not compatible with DPSGD); (2) add label embedding layers to enable labelled generation; and (3) adjust convolutional/transpose convolutional stride lengths and kernel sizes as well as remove the last layer, in order to process 1× 28× 28 images without having to resize. Finally, we remove their custom weight initialization, opting for PyTorch defaults.
Our baseline non-private GANs are trained for 45000 steps. We train our non-private GANs with poisson sampling as well: for each step of discriminator training, we sample real examples by including each element of our dataset independently with probability B/n, where n is the size of our dataset. We then add B fake examples sampled from G to form our fake/real combined batch.
B.2 LARGE BATCH SIZE HYPERPARAMETER SEARCH
We scale up batch sizes, considering B ∈ {64, 128, 512, 2048}, and search for the optimal noise scale σ and nD. For B = 128 targeting ε = 10, we search over three noise scales, Σ^{ε=10}_{B=128} = {0.6, 1.0, 1.4}. We choose candidate noise scales for other batch sizes as follows: when considering a batch size 128k, we search over Σ^{ε=10}_{B=128k} := {√k · σ : σ ∈ Σ^{ε=10}_{B=128}}. We also target the high privacy (ε = 1) regime. For ε = 1, we multiply all noise scales by 5: Σ^{ε=1}_{B} = {5σ : σ ∈ Σ^{ε=10}_{B}}. We search over a grid nD ∈ {1, 2, 5, 10, 20, 50, 100, 200, 500}. Due to compute limitations, we omit some values that we are confident will fail (e.g., trying nD = 1 when mode collapse occurs for nD = 5).
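This scaling rule translates directly into code; a small helper of ours (names illustrative) for generating the candidate grids:

```python
import math

BASE = {10: [0.6, 1.0, 1.4]}         # sigma grid for B = 128 at eps = 10
BASE[1] = [5 * s for s in BASE[10]]  # eps = 1: multiply every scale by 5

def sigma_grid(batch_size, eps):
    """Candidate noise scales for batch size 128k: scale the base grid by sqrt(k)."""
    k = batch_size / 128
    return [round(math.sqrt(k) * s, 3) for s in BASE[eps]]

print(sigma_grid(2048, eps=10))  # [2.4, 4.0, 5.6]
```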
C ADDITIONAL DISCUSSION
GANhacks. Guidance in the non-private setting (tip 14 of Chintala et al. (2016)) prescribes to train the discriminator for more steps in the presence of noise (a regularization approach used in non-private GANs). This is the case for DP, and is our core strategy that yields the most significant gains in utility. We were not aware of this tip when we discovered this phenomenon, but it serves as validation of our finding. While Chintala et al. (2016) provides little elaboration, looking at further explorations of this principle in the non-private setting may offer guidance for improving DPGANs. | 1. What is the focus of the paper regarding DP-GANs?
2. What are the strengths of the proposed techniques, particularly in improving generation quality?
3. What are the weaknesses of the paper, including limitations in the proposed method and experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for future research or improvements to the proposed method? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a simple strategy to improve DP-GANs by using more update steps for the discriminator, adopting a step scheduler, and taking a larger batch size. The authors provide empirical findings to justify such improvement by linking generation quality with the discriminator accuracy through the training process. Experiments on MNIST and FashionMNIST show very promising results in terms of generation quality.
Strengths And Weaknesses
Strengths:
The proposed techniques are very effective to achieve much better FID on MNIST and FashionMNIST.
The empirical explanation of balancing the discriminator and maximizing generator steps taken when discriminator accuracy is high, is interesting and inspirational.
Weaknesses:
Overall, I think the proposed method is currently limited to an engineering technique, which might not be mature enough. However, I do believe it has high potential to develop into a principled method with high impact along the current direction. For example, I would recommend further elaborating on the step scheduler to make it more adaptive and including more ablation studies. The authors might be able to draw inspiration from StyleGAN2-ADA for such a paradigm.
The improvement of utility accuracy is not as significant as FID. It would also be better to include an experiment on some larger datasets such as CelebA.
The writing and the organization could be improved.
Although fancy, the title is not very informative.
Figure 1 and figure 2 share the same purpose.
Section 3.3 is duplicate to the previous content.
Section 5.1 and 5.2 are not well aligned with the theme of Section 5 and the results are not well linked to the discriminator accuracy.
In Algorithm 1, some symbols are used without definition.
In Table 1, "(This work)" is ambiguous.
Clarity, Quality, Novelty And Reproducibility
The clarity and quality are good in general with some issues to be improved. This paper is novel in terms of empirical findings, while the proposed method is currently limited to an engineering technique. The reproducibility is good with sufficient implementation details provided. |
ICLR | Title
Private GANs, Revisited
Abstract
We show that with improved training, the standard approach for differentially private GANs – updating the discriminator with noisy gradients – achieves or competes with state-of-the-art results for private image synthesis. Existing instantiations of this approach neglect to consider how adding noise only to discriminator updates disrupts the careful balance between generator and discriminator necessary for successful GAN training. We show that a simple fix – taking more discriminator steps between generator steps – restores parity and improves training. Furthermore, with the goal of restoring parity between the generator and discriminator, we experiment with further modifications to improve discriminator training and see further improvements in generation quality. For MNIST at ε = 10, our private GANs improve the record FID from 48.4 to 13.0, and record downstream classifier accuracy from 83.2% to 95.0%.
N/A
We show that with improved training, the standard approach for differentially private GANs – updating the discriminator with noisy gradients – achieves or competes with state-of-the-art results for private image synthesis. Existing instantiations of this approach neglect to consider how adding noise only to discriminator updates disrupts the careful balance between generator and discriminator necessary for successful GAN training. We show that a simple fix – taking more discriminator steps between generator steps – restores parity and improves training. Furthermore, with the goal of restoring parity between the generator and discriminator, we experiment with further modifications to improve discriminator training and see further improvements in generation quality. For MNIST at ε = 10, our private GANs improve the record FID from 48.4 to 13.0, and record downstream classifier accuracy from 83.2% to 95.0%.
1 INTRODUCTION
Differential privacy (DP) (Dwork et al., 2006b) has emerged as a compelling approach for training machine learning models on sensitive data. However, incorporating DP requires significant changes to the training process. Notably, it prevents the modeller from working directly with private data, complicating debugging and exploration. Furthermore, the modeller can no longer interact with a private dataset after exhausting their allocated privacy budget. One approach to alleviate these issues is by producing differentially private synthetic data, which can be plugged directly into existing machine learning pipelines, without further concern for privacy.
A recent line of work studies leveraging deep generative models to produce DP synthetic data. Early efforts focused on privatizing generative adversarial networks (GANs) (Goodfellow et al., 2014) by using differentially private stochastic gradient descent (DPSGD) (Abadi et al., 2016) to update the GAN discriminator – an approach referred to as DPGAN (Xie et al., 2018; Beaulieu-Jones et al., 2019; Torkzadehmahani et al., 2019).
However, follow-up work has significantly departed from this baseline DPGAN approach, either in terms of: (a) the privatization scheme, in favor of approaches based on subsample-and-aggregate which divide the data into ≥ 1000 disjoint partitions and train teacher discriminators separately on each one (Jordon et al., 2019; Long et al., 2021; Chen et al., 2020; Wang et al., 2021); or (b) the generative modelling framework altogether, opting instead to minimize notions of statistical distance between real and generated data, such as maximum mean discrepancy (Harder et al., 2021; Vinaroz et al., 2022), or Sinkhorn divergences (Cao et al., 2021).
For labelled image synthesis, these custom generative models designed specifically for privacy fall short of GANs when evaluated at their non-private limits (ε → ∞), suggesting limited scalability to larger, higher-resolution datasets.1 On the other hand, the literature corroborates that under modest privacy budgets, these departures from the baseline DPGAN lead to significant improvements in generation quality. Proposed explanations attribute these results to inherent limitations of the DPGAN framework, suggesting that either: (a) privatizing discriminator training is sufficient for privacy, but may be overkill when only the generator needs to be released (Long et al., 2021); or (b) adversarial objectives may be unsuited for training under privacy (Cao et al., 2021).
1For example, the record FID for MNIST at ε = 10 is 48.4 (Cao et al., 2021). When evaluated at ε = ∞, their method achieves an FID of 43.4. Our non-private GANs obtain an FID of 3.2.
Our contributions. We demonstrate that the reported poor results of DPGANs should not be attributed to inherent limitations of the framework, but rather, training issues. Specifically, we propose that the asymmetric noise addition in DPGANs (adding noise to discriminator updates only) weakens the discriminator relative to the generator, disrupting the careful balance necessary for successful GAN training. We propose that taking more discriminator steps between generator updates addresses the imbalance introduced by noise. With this change, DPGANs improve significantly (see Figure 1), going from non-competitive to achieving or competing with state-of-the-art results in private image synthesis.
Furthermore, we show this perspective on private GAN training (“restoring parity to a discriminator weakened by DP noise”) can be applied to improve training. We make other modifications to discriminator training – large batch sizes and adaptive discriminator step frequency – to further improve upon the aforementioned results.
In summary, we make the following contributions:
1. We find that taking more discriminator steps between generator steps significantly improves DPGANs. Contrary to the previous results in the literature, DPGANs do compete with state-of-the-art generative modelling approaches designed with privacy in mind.
2. We present empirical findings towards understanding why more frequent discriminator steps help. We propose an explanation based on asymmetric noise addition for why vanilla DPGANs do not perform well, and why taking more steps helps.
3. We put our explanation to the test. We employ it as a principle for designing better private GAN training recipes, and indeed are able to improve over the aforementioned results.
2 PRELIMINARIES
Our goal is to train a generative model on sensitive data that is safe to release, i.e., it does not leak the secrets of individuals in the training dataset. We do this by ensuring the training algorithm A – which takes as input the sensitive dataset D ∈ U and returns the parameters of a trained (generative) model θ ∈ Θ – satisfies differential privacy. Definition 1 (Differential Privacy (Dwork et al., 2006b)). A randomized algorithm A : U → Θ is (ε, δ)-differentially private if for every pair of neighbouring datasets D,D′ ∈ U , we have
P{A(D) ∈ S} ≤ exp(ε) · P{A(D′) ∈ S}+ δ for all S ⊆ Θ.
In this work, we adopt the add/remove definition of DP, and say two datasets D and D′ are neighbouring if they differ in at most one entry, that is, D = D′ ∪ {x} or D′ = D ∪ {x}.
Algorithm 1 TrainDPGAN(D; ·) 1: Input: Labelled dataset D = {(xj , yj)}nj=1. Discriminator D and generator G initializations ϕ0 and
θ0. Optimizers OptD, OptG. Privacy parameter δ. Hyperparameters: nD (D steps per G step), T (total number of D steps), B (expected batch size), C (clipping norm), σ (noise multiplier). 2: q ← B/|D| and t, k ← 0 ▷ Calculate sampling rate q, initialize counters. 3: while t < T do ▷ Update D with DPSGD. 4: St ∼ PoissonSample(D, q) ▷ Sample a real batch St by including each (x, y) ∈ D w.p. q. 5: S̃t ∼ G(·; θk)B ▷ Sample fake batch S̃t. 6: gϕt ← ∑ (x,y)∈St clip (∇ϕt(− log(D(x, y;ϕt)));C)
+ ∑
(x̃,ỹ)∈S̃t clip (∇ϕt(− log(1−D(x̃, ỹ;ϕt)));C) ▷ Clip per-example gradients. 7: ĝϕt ← 12B (gϕt + zt), where zt ∼ N (0, C
2σ2I)) ▷ Add Gaussian noise. 8: ϕt+1 ← OptD(ϕt, ĝθt) and t← t+ 1 9: if nD divides t then ▷ Perform G update every nD steps.
10: S̃′t ∼ G(·; θk)B 11: gθk ← 1 B ∑ (x̃,ỹ)∈S̃′t
∇θk (− log(D(x̃, ỹ;ϕt))) 12: θk+1 ← OptG(θk, gθk ) and k ← k + 1 13: end if 14: end while 15: ε← PrivacyAccountant(T, σ, q, δ) ▷ Compute privacy budget spent. 16: Output: Final G parameters θk. (ε, δ)-DP guarantee.
We highlight one convenient property of DP, known as closure under post-processing. This says that interacting with a privatized model (e.g., using it to compute gradients on non-sensitive data, generate samples) does not lead to any further privacy violation.
Proposition 2 (Post-processing). Let A : U → Θ be a randomized algorithm that is (ε, δ)-DP, and f : Θ→ Y be an arbitrarily randomized mapping. Then f ◦ A : U → Y is (ε, δ)-DP.
DPSGD. A gradient-based training algorithm can be privatized by employing differentially private stochastic gradient descent (DPSGD) (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016) as a drop-in replacement for SGD. DPSGD involves clipping per-example gradients and adding Gaussian noise to their sum, which effectively bounds and masks the contribution of any individual point to the final model parameters. Privacy analysis of DPSGD follows from several classic tools in the DP toolbox: Gaussian mechanism, privacy amplification by subsampling, and composition (Dwork et al., 2006a; Dwork & Roth, 2014; Abadi et al., 2016; Wang et al., 2019). Our work employs the DPSGD analysis of Mironov et al. (2019) implemented in Opacus (Yousefpour et al., 2021).
DPGANs. Algorithm 1 details the training algorithm for DPGANs, which is effectively an instantiation of DPSGD. Note that only gradients for the discriminator D must be privatized (via clipping and noise), and not those for the generator G. This is a consequence of post-processing (Proposition 2) – the generator only interacts with the sensitive dataset indirectly via discriminator parameters, and therefore does not need further privatization.
3 FREQUENT DISCRIMINATOR STEPS IMPROVES PRIVATE GANS
In this section, we discuss our main finding: the number of discriminator steps taken between each generator step (nD from Algorithm 1) plays a significant role in the success of private GAN training. For a fixed setting of DPSGD hyperparameters, there is an optimal range of values for nD that maximizes generation quality, in terms of both visual quality and utility for downstream classifier training. This value is often quite large (nD ≈ 100 in some cases).
3.1 EXPERIMENTAL DETAILS
Setup. We focus on labelled generation of MNIST (LeCun et al., 1998) and FashionMNIST (Xiao et al., 2017), both of which are comprised of 60000 28×28 grayscale images divided into 10 classes. To build a strong baseline, we begin from an open source PyTorch (Paszke et al., 2019) implemen-
tation2 of DCGAN (Radford et al., 2016) that performs well non-privately, and copy their training recipe. We then adapt their architecture to our purposes: removing BatchNorm layers (which are not compatible with DPSGD) and adding label embedding layers to enable labelled generation. Training this configuration non-privately yields labelled generation that achieves FID scores of 3.2 on MNIST and 15.9 on FashionMNIST. Finally, we note that these models are not small: D and G have 1.72M and 2.27M trainable parameters respectively. Please see Appendix B.1 for more details.
Privacy implementation. To privatize training, we use Opacus (Yousefpour et al., 2021) which implements per-example gradient computation and the RDP accounting of Mironov et al. (2019). For our baseline setting, we use the following DPSGD hyperparameters: we keep the non-private (expected) batch size B = 128, and use a noise scale σ = 1 and clipping norm C = 1. Under these settings, we have the budget for T = 450000 discriminator steps when targeting (10, 10−5)-DP.
Evaluation. We evaluate our generative models by examining the visual quality and utility for downstream tasks of generated images. Following prior work, we measure visual quality by computing the Fréchet Inception Distance (FID) (Heusel et al., 2017) between 60000 generated images and entire test set.3 To measure downstream task utility, we again follow prior work, and train a CNN classifier on 60000 generated image-label pairs and report its accuracy on the real test set.
3.2 RESULTS
More frequent discriminator steps improves generation. We plot in Figures 1a and 2 the evolution of FID and downstream accuracy during DPGAN training for both MNIST and FashionMNIST, under varying discriminator update frequencies nD. The effect of this parameter has outsized impact on the final results. For MNIST, nD = 50 yields the best results; on FashionMNIST, the best FID is obtained at nD = 200 and the best accuracy at nD = 100.
Private GANs are on a path to mode collapse. For the MNIST results in Figures 1a and 2a, we observe that at low discriminator update frequencies (nD = 10), the best FID and accuracy scores occur early in training, well before the privacy budget we are targeting is exhausted.4 In fact, at 50000 discriminator steps (ε ≈ 2.85), nD = 10 has better FID (30.6) and accuracy (83.3%) than other settings of nD. However, these results deteriorate with continued training. In Figure 3, we
2Courtesy of Hyeonwoo Kang (https://github.com/znxlwm). Code available at this link. 3We use an open source PyTorch implementation to compute FID: https://github.com/ mseitzer/pytorch-fid. 4This observation has been reported in (Neunhoeffer et al., 2021), serving as motivation for their remedy of taking a mixture of intermediate models encountered in training. We are not aware of any mentions of this aspect of DPGAN training in papers reporting DPGAN baselines for labelled image synthesis.
plot the evolution of generated images for this nD = 10 run over the course of training, and observe qualitative evidence of mode collapse, co-occurring with the deterioration in FID and accuracy.
An optimal discriminator update frequency. These results suggest that, fixing other DPSGD hyperparameters, there is an optimal setting for the discriminator step frequency nD that strikes a balance between: (1) being too low, causing the generation quality to peak early in training and then undergo mode collapse, so that all subsequent training consumes additional privacy budget without improving the model; and (2) being too high, preventing the generator from taking enough steps to converge before the privacy budget is exhausted (an example of this is the nD = 200 run in Figure 2a). Striking this balance results in the most effective utilization of privacy budget towards improving the generator.
4 WHY DOES TAKING MORE STEPS HELP?
In this section, we present empirical findings towards understanding why more frequent discriminator steps improve DPGAN training. We propose an explanation that is consistent with our findings.
How does DP affect GAN training? Figure 4 compares the accuracy of the GAN discriminator (on held-out real and fake examples) immediately before each generator step between non-private training and private training with different settings of nD. We observe that non-privately, discriminator accuracy stays around 60% throughout training. Naively introducing DP (nD = 1) leads to a qualitative difference: DP causes discriminator accuracy to drop to 50% immediately at the start of training, and never recovers.5
For other settings of nD, we make three observations: (1) larger nD corresponds to higher accuracy; (2) the generator improves during the periods in which the discriminator stays above 50% accuracy; and (3) accuracy decreases throughout training as the generator improves, and degradation/stagnation of the generator (as observed in Figure 3) co-occurs with discriminator accuracy dropping to 50%.
Based on these observations, we propose the following explanation for why more steps help:
• Generator improvement occurs when the discriminator is capable of distinguishing between real and fake data.
• The asymmetric noise addition introduced by DP to the discriminator makes such a task difficult, resulting in limited generator improvement.
• Allowing the discriminator to train longer on a fixed generator improves its accuracy, recovering the non-private case where the generator and discriminator are balanced.
Does reducing noise accomplish the same thing? In light of this explanation, we ask if reducing the noise level σ can offer the same improvement as taking more steps, as reducing σ should also improve discriminator accuracy before a generator step. To test this: starting from our setting in Section 3, fixing nD = 1, and targeting MNIST at ε = 10, we search over a grid of noise levels
5Our plot only shows the first 20000 generator steps, but we remark that this persists until the end of training (450000 steps).
σ = {0.4, 0.43, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8}; the lowest of which, σ = 0.4, admits a budget of only T = 360 discriminator steps. We obtain a best FID of 127.1 and best accuracy of 57.5% at noise level σ = 0.45. Hence we can conclude that in this setting, incorporating discriminator update frequency in our design space allows for more effective use of privacy budget for improving the discriminator, and in turn, generation quality.
Does taking more discriminator steps always help? As we discuss in more detail in Section 5.1, when we are able to find other means to improve the discriminator beyond taking more steps, tuning discriminator update frequency may not yield improvements. To illustrate with an extreme case, consider eliminating the privacy constraint. In non-private GAN training, taking more steps is known to be unnecessary. We corroborate this result: we run our non-private baseline from Section 3 with the same number of generator steps, but opt to take 10 discriminator steps between each generator step instead of 1. FID worsens from 3.2 → 8.3, and accuracy worsens from 96.8% → 91.3%.
5 BETTER GENERATORS VIA BETTER DISCRIMINATORS
Our proposed explanation in Section 4 provides a concrete suggestion for improving GAN training: effectively use our privacy budget to maximize the number of generator steps taken when the discriminator has sufficiently high accuracy. We experiment with modifications to the private GAN training recipe towards these ends, which translate to improved generation.
5.1 LARGER BATCH SIZES
Several recent works have demonstrated that for classification tasks, DPSGD achieves higher accuracy with larger batch sizes, after tuning the noise scale σ accordingly (Tramèr & Boneh, 2021; Anil et al., 2021; De et al., 2022). GAN training is typically conducted with small batch sizes (for example, DCGAN uses B = 128, which we adopt; StyleGAN uses B = 32). Therefore it is interesting to see if large batch sizes indeed improve private GAN training. We corroborate that larger batch sizes do not significantly improve our non-private MNIST baseline from Section 3: when we go up to B = 2048 from B = 128, FID stays at 3.2 and accuracy improves from 96.8%→ 97.5%.
Results. We scale up batch sizes, considering B ∈ {64, 128, 512, 2048}, and search for the optimal noise scale σ and nD (details in Appendix B.2). We target both ε = 1 and ε = 10. We report the best results from our hyperparameter search in Table 1. We find that larger batch sizes lead to improvements: for ε = 10, the best MNIST and FashionMNIST results are achieved at B = 2048. For ε = 1, the best results are achieved at B = 512. We also note that for large batch sizes, the optimal number of generator steps can be quite small. For B = 2048, σ = 4.0, targeting MNIST at ε = 10, nD = 5 is the optimal discriminator update frequency, and improves over our best B = 128 setting employing nD = 50.
5.2 ADAPTIVE DISCRIMINATOR STEP FREQUENCY
Our observations from Sections 3 and 4 motivate us to consider adaptive discriminator step frequencies. As pictured in Figure 4, discriminator accuracy drops during training as the generator improves. In this scenario, we want to take more steps to improve the discriminator, in order to further improve the generator. However, using a large discriminator update frequency right from the beginning of training is wasteful – as evidenced by the fact that low nD achieves the best FID and accuracy early in training. Hence we propose to start at a low discriminator update frequency (nD = 1), and ramp up when our discriminator is performing poorly.
Measuring discriminator accuracy involves real data, so it must be released with DP. While this is feasible, it introduces the additional problem of having to find the right split of privacy budget for the best performance. We observe that overall discriminator accuracy tracks discriminator accuracy on fake samples alone (which is free to evaluate, by post-processing). Hence we use the latter as a proxy to assess discriminator performance.
The adaptive step frequency is parameterized by two terms, β and d. β is the decay parameter used to compute the exponential moving average (EMA) of discriminator accuracy on fake batches before each generator update. We use β = 0.99 in all settings. d is the accuracy floor; upon reaching it, we move to the next update frequency nD ∈ {1, 2, 5, 10, 20, 50, 100, 200, 500}. We try d = 0.6 and d = 0.7, finding that 0.7 works better for large batches. Additionally, we allow a grace period of 2/(1 − β) = 200 generator steps before moving on to the next update frequency. This formula is motivated by the fact that a β-EMA's value is primarily determined by its last 2/(1 − β) observations. An additional benefit of the adaptive step frequency is that we do not have to search for the optimal update frequency. Although the adaptive step frequency introduces the extra hyperparameter of the threshold d, we found that these two settings (d = 0.6 and d = 0.7) were sufficient to improve over the results of a much more extensive hyperparameter search.
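To make the schedule concrete, here is a minimal Python sketch of the controller just described. The class and variable names are ours, and the initial EMA value of 1.0 is an assumption rather than a detail taken from the paper.

class AdaptiveStepFrequency:
    """EMA-based schedule for the discriminator step frequency n_D."""

    LADDER = [1, 2, 5, 10, 20, 50, 100, 200, 500]

    def __init__(self, beta=0.99, floor=0.7):
        self.beta = beta
        self.floor = floor                 # accuracy floor d
        self.grace = int(2 / (1 - beta))   # 200 generator steps
        self.ema = 1.0                     # assumed initial value
        self.level = 0                     # index into LADDER
        self.steps_at_level = 0

    def update(self, fake_acc):
        """Call once per generator step with accuracy on a fake batch."""
        self.ema = self.beta * self.ema + (1 - self.beta) * fake_acc
        self.steps_at_level += 1
        if (self.ema < self.floor
                and self.steps_at_level >= self.grace
                and self.level + 1 < len(self.LADDER)):
            self.level += 1                # ramp up n_D
            self.steps_at_level = 0        # restart the grace period
        return self.LADDER[self.level]     # n_D to use next

Since accuracy on fake batches never touches real data, calling update() consumes no additional privacy budget, by post-processing.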
5.3 COMPARISON WITH PREVIOUS RESULTS IN THE LITERATURE
5.3.1 MNIST AND FASHIONMNIST
Table 1 summarizes our best experimental settings for MNIST and FashionMNIST, and situates them in the context of previously reported results for the task. We provide some example generated images in Figures 7 and 8 for ε = 10, and Figures 9 and 10 for ε = 1.
Simple DPSGD beats all alternative GAN privatization schemes. Our baseline DPGAN from Section 3, with the appropriate choice of nD (and without the modifications described in this section yet), outperforms all other GAN-based approaches proposed in the literature (GS-WGAN, PATEGAN, G-PATE, and DataLens) uniformly across both metrics, both datasets, and both privacy levels.
Large batch sizes and adaptive step schedules improve GAN training. Broadly speaking, across both privacy levels and both datasets, we see an improvement from taking larger batch sizes, and then another with an adaptive step schedule. The magnitude of improvement varies.
Comparison with state-of-the-art. In the low privacy/high ε regime, most of our results are dramatically better than prior work6 – for example, decreasing FID from 48.4 to 13.0 and increasing accuracy from 83.2% to 95.0% on MNIST. In the high privacy/low ε regime, improvements are not quite as extreme, but can still be significant (FID for MNIST and FashionMNIST); we only compare negatively to state-of-the-art for accuracy on FashionMNIST. A visual comparison of the ε = 10 results is given in Figure 5.
5.3.2 CELEBA-GENDER
We also report results on generating 32 × 32 CelebA, conditioned on gender at (10, 10−6)-DP. For these experiments, we used slightly larger models (2.64M and 3.16M parameters for D and G
6We do not compare with two recent works on private generative models (Chen et al., 2022; Jiang et al., 2022), as we believe there are gaps in their privacy analyses. This has been confirmed by the authors of Jiang et al. (2022), and the sketch of an argument regarding non-privacy of Chen et al. (2022) has been shared with us by others (Anonymous, 2022).
respectively), and employed large batches (B = 1024) and adaptive discriminator step frequency with threshold d = 0.6. Results are summarized in Table 2, example images are in Figure 11.
6 DISCUSSION AND RELATED WORK
DP generative models. The baseline DPGAN that employs a DPSGD-trained discriminator was introduced by Xie et al. (2018), and was subsequently studied in several works (Torkzadehmahani et al., 2019; Beaulieu-Jones et al., 2019). Despite significant interest in the approach and numerous applications to various problems (≈ 300 citations as of November 2022), we were unable to find studies that explore the modifications we perform or uncover similar principles for improving training. Perhaps as a consequence, subsequent work has departed from this approach, examining alternative privatization schemes for GANs (Jordon et al., 2019; Long et al., 2021; Chen et al., 2020;
7We group per-class unconditional GANs together with conditional GANs under the DPGAN umbrella. 8These results are presented graphically in the paper. Exact numbers can be found in their code.
Wang et al., 2021). Contrary to their claims, our work shows that these privatization schemes do not outperform DPSGD. Other generative modelling frameworks have been applied to DP synthetic data including VAEs (Chen et al., 2018), maximum mean discrepancy (Harder et al., 2021; Vinaroz et al., 2022), Sinkhorn divergences (Cao et al., 2021), and normalizing flows (Waites & Cummings, 2021). We show that a well-tuned DPGAN competes with or outperforms these approaches.

Custom approaches versus a well-tuned DPSGD. An ongoing debate pertains to the best techniques and architectures for private ML. Roughly speaking, there are two schools of thought. One investigates novel architectures for privacy, which may be outperformed by more traditional approaches in the non-private setting. Some examples include Chen et al. (2018); Cao et al. (2021); Vinaroz et al. (2022), a variety of generative models specifically designed to be compatible with differential privacy. The other focuses on searching within the space of tried-and-tested methods that are understood to work well non-privately. Some examples include the works of De et al. (2022); Li et al. (2022), who demonstrate that, similar to the non-private setting, large-scale CNN and Transformer architectures can achieve state-of-the-art results for image classification and NLP tasks. The primary modifications to the pipeline are along the lines of changing the batch size, modifying the type of normalization layers, etc., most of which would be explored in a proper hyperparameter search in the non-private setting. Our work fits into the latter line: we show that novel generative models introduced for privacy can be outperformed by GANs trained with well-tuned DPSGD.
Tabular data. Our investigation focused on image datasets, while many important applications of private data generation involve tabular data. While Tao et al. (2021) find that private GAN-based approaches fail to preserve even basic statistics in these settings, we believe that our techniques may yield similar improvements.
7 CONCLUSION
Our most important contribution is to show that private GANs have been underrated by the research community, and can achieve state-of-the-art results with careful tuning. We hope and anticipate this will inspire the community to revisit private GANs, and quickly improve upon our results.
A GENERATED SAMPLES
We provide a few non-cherrypicked samples for MNIST and FashionMNIST at ε = 10 and ε = 1, as well as 32 × 32 CelebA-Gender at ε = 10.
B IMPLEMENTATION DETAILS
B.1 MNIST AND FASHIONMNIST TRAINING RECIPE
For MNIST and FashionMNIST, we begin from an open source PyTorch implementation of DCGAN (Radford et al., 2016) (available at this link) that performs well non-privately, and copy their
training recipe. This includes: batch size B = 128, the Adam optimizer (Kingma & Ba, 2015) with parameters (α = 0.0002, β1 = 0.5, β2 = 0.999) for both G and D, the non-saturating GAN loss (Goodfellow et al., 2014), and a 5-layer fully convolutional architecture with width parameter d = 128.
To adapt it to our purposes, we make three architectural modifications: in both G and D we (1) remove all BatchNorm layers (which are not compatible with DPSGD); (2) add label embedding layers to enable labelled generation; and (3) adjust convolutional/transpose convolutional stride lengths and kernel sizes as well as remove the last layer, in order to process 1 × 28 × 28 images without having to resize. Finally, we remove their custom weight initialization, opting for PyTorch defaults.
Our baseline non-private GANs are trained for 45000 steps. We train our non-private GANs with Poisson sampling as well: for each step of discriminator training, we sample real examples by including each element of our dataset independently with probability B/n, where n is the size of our dataset. We then add B fake examples sampled from G to form our fake/real combined batch.
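A sketch of this sampling step (illustrative, not the authors' code):

import torch

def poisson_sample(xs, ys, expected_batch_size):
    # Include each example independently with probability B/n.
    n = xs.shape[0]
    mask = torch.rand(n) < expected_batch_size / n
    return xs[mask], ys[mask]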
B.2 LARGE BATCH SIZE HYPERPARAMETER SEARCH
We scale up batch sizes, considering B ∈ {64, 128, 512, 2048}, and search for the optimal noise scale σ and nD. For B = 128 targeting ε = 10, we search over three noise scales, Σ^{ε=10}_{B=128} = {0.6, 1.0, 1.4}. We choose candidate noise scales for other batch sizes as follows: when considering a batch size 128k, we search over Σ^{ε=10}_{B=128k} := {√k · σ : σ ∈ Σ^{ε=10}_{B=128}}. We also target the high privacy (ε = 1) regime. For ε = 1, we multiply all noise scales by 5, Σ^{ε=1}_{B} = {5σ : σ ∈ Σ^{ε=10}_{B}}. We search over a grid nD ∈ {1, 2, 5, 10, 20, 50, 100, 200, 500}. Due to compute limitations, we omit some values that we are confident will fail (e.g., trying nD = 1 when mode collapse occurs for nD = 5).
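The scaling rule above can be written out mechanically; a small Python sketch, assuming the base grid quoted in the text:

import math

base = {0.6, 1.0, 1.4}                  # sigma grid for B = 128, eps = 10

def sigma_grid(batch_size, eps):
    k = batch_size / 128                # B = 128k
    grid = {math.sqrt(k) * s for s in base}
    if eps == 1:                        # high-privacy regime: 5x the noise
        grid = {5 * s for s in grid}
    return sorted(grid)

print(sigma_grid(2048, eps=10))         # [2.4, 4.0, 5.6] up to float rounding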
C ADDITIONAL DISCUSSION
GANhacks. Guidance in the non-private setting (tip 14 of Chintala et al. (2016)) prescribes training the discriminator for more steps in the presence of noise (a regularization approach used in non-private GANs). This is the case for DP, and is our core strategy that yields the most significant gains in utility. We were not aware of this tip when we discovered this phenomenon, but it serves as validation of our finding. While Chintala et al. (2016) provides little elaboration, looking at further explorations of this principle in the non-private setting may offer guidance for improving DPGANs. | 1. What are the main contributions of the paper regarding differentially private GANs?
2. What are the strengths of the proposed approach, particularly in terms of utilizing recent architectures and training tricks?
3. What are the weaknesses of the paper, especially regarding its novelty compared to prior works?
4. Do you have any questions or concerns about the paper's calculations and experiments, such as the calculation of (10, 10^-5)-DP and the effect of noise scales, n_D, and batch size on privacy?
5. How would you assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper provides two empirical findings on how to train differentially private GANs: larger batch sizes, and more discriminator steps. Experimental results show superior performance against existing baselines.
Strengths And Weaknesses
Strength
In the literature on differentially private GANs, the utilization of more recent architectures and training tricks (e.g., StyleGAN2, StyleGAN3, ...) for higher-quality samples seems mostly unexplored. This paper makes a good point that these "tricks" could largely boost the performance of private GANs.
Weakness
From my perspective, this work has limited novelty, since larger batch sizes and taking more discriminator steps have been studied widely in standard GAN papers. Trying these tricks on private GANs seems like a natural attempt.
Several details are missing. For example, how is (10, 10^-5)-DP calculated from B=128, sigma=1, C=1, T=450000 (in Section 3.1)? Please provide the formal equation for calculating this. In addition, noise scales, n_D, and batch size also affect privacy; in Section 5.1, how can you target the same \epsilon with different values of these?
Clarity, Quality, Novelty And Reproducibility
Some details are missing, and code is not provided.
ICLR | Title
Private GANs, Revisited
Abstract
We show that with improved training, the standard approach for differentially private GANs – updating the discriminator with noisy gradients – achieves or competes with state-of-the-art results for private image synthesis. Existing instantiations of this approach neglect to consider how adding noise only to discriminator updates disrupts the careful balance between generator and discriminator necessary for successful GAN training. We show that a simple fix – taking more discriminator steps between generator steps – restores parity and improves training. Furthermore, with the goal of restoring parity between the generator and discriminator, we experiment with further modifications to improve discriminator training and see further improvements in generation quality. For MNIST at ε = 10, our private GANs improve the record FID from 48.4 to 13.0, and record downstream classifier accuracy from 83.2% to 95.0%.
1 INTRODUCTION
Differential privacy (DP) (Dwork et al., 2006b) has emerged as a compelling approach for training machine learning models on sensitive data. However, incorporating DP requires significant changes to the training process. Notably, it prevents the modeller from working directly with private data, complicating debugging and exploration. Furthermore, the modeller can no longer interact with a private dataset after exhausting their allocated privacy budget. One approach to alleviate these issues is by producing differentially private synthetic data, which can be plugged directly into existing machine learning pipelines, without further concern for privacy.
A recent line of work studies leveraging deep generative models to produce DP synthetic data. Early efforts focused on privatizing generative adversarial networks (GANs) (Goodfellow et al., 2014) by using differentially private stochastic gradient descent (DPSGD) (Abadi et al., 2016) to update the GAN discriminator – an approach referred to as DPGAN (Xie et al., 2018; Beaulieu-Jones et al., 2019; Torkzadehmahani et al., 2019).
However, follow-up work has significantly departed from this baseline DPGAN approach, either in terms of: (a) the privatization scheme, in favor of approaches based on subsample-and-aggregate which divide the data into ≥ 1000 disjoint partitions and train teacher discriminators separately on each one (Jordon et al., 2019; Long et al., 2021; Chen et al., 2020; Wang et al., 2021); or (b) the generative modelling framework altogether, opting instead to minimize notions of statistical distance between real and generated data, such as maximum mean discrepancy (Harder et al., 2021; Vinaroz et al., 2022), or Sinkhorn divergences (Cao et al., 2021).
For labelled image synthesis, these custom generative models designed specifically for privacy fall short of GANs when evaluated at their non-private limits (ε → ∞), suggesting limited scalability to larger, higher-resolution datasets.1 On the other hand, the literature corroborates that under modest privacy budgets, these departures from the baseline DPGAN lead to significant improvements in generation quality. Proposed explanations attribute these results to inherent limitations of the DPGAN framework, suggesting that either: (a) privatizing discriminator training is sufficient for privacy, but may be overkill when only the generator needs to be released (Long et al., 2021); or (b) adversarial objectives may be unsuited for training under privacy (Cao et al., 2021).
1For example, the record FID for MNIST at ε = 10 is 48.4 (Cao et al., 2021). When evaluated at ε = ∞, their method achieves an FID of 43.4. Our non-private GANs obtain an FID of 3.2.
Our contributions. We demonstrate that the reported poor results of DPGANs should not be attributed to inherent limitations of the framework, but rather, training issues. Specifically, we propose that the asymmetric noise addition in DPGANs (adding noise to discriminator updates only) weakens the discriminator relative to the generator, disrupting the careful balance necessary for successful GAN training. We propose that taking more discriminator steps between generator updates addresses the imbalance introduced by noise. With this change, DPGANs improve significantly (see Figure 1), going from non-competitive to achieving or competing with state-of-the-art results in private image synthesis.
Furthermore, we show this perspective on private GAN training (“restoring parity to a discriminator weakened by DP noise”) can be applied to improve training. We make other modifications to discriminator training – large batch sizes and adaptive discriminator step frequency – to further improve upon the aforementioned results.
In summary, we make the following contributions:
1. We find that taking more discriminator steps between generator steps significantly improves DPGANs. Contrary to the previous results in the literature, DPGANs do compete with state-of-the-art generative modelling approaches designed with privacy in mind.
2. We present empirical findings towards understanding why more frequent discriminator steps help. We propose an explanation based on asymmetric noise addition for why vanilla DPGANs do not perform well, and why taking more steps helps.
3. We put our explanation to the test. We employ it as a principle for designing better private GAN training recipes, and indeed are able to improve over the aforementioned results.
2 PRELIMINARIES
Our goal is to train a generative model on sensitive data that is safe to release, i.e., it does not leak the secrets of individuals in the training dataset. We do this by ensuring the training algorithm A – which takes as input the sensitive dataset D ∈ U and returns the parameters of a trained (generative) model θ ∈ Θ – satisfies differential privacy. Definition 1 (Differential Privacy (Dwork et al., 2006b)). A randomized algorithm A : U → Θ is (ε, δ)-differentially private if for every pair of neighbouring datasets D,D′ ∈ U , we have
P{A(D) ∈ S} ≤ exp(ε) · P{A(D′) ∈ S} + δ for all S ⊆ Θ.
In this work, we adopt the add/remove definition of DP, and say two datasets D and D′ are neighbouring if they differ in at most one entry, that is, D = D′ ∪ {x} or D′ = D ∪ {x}.
Algorithm 1 TrainDPGAN(D; ·)
1: Input: Labelled dataset D = {(x_j, y_j)}_{j=1..n}. Discriminator D and generator G initializations ϕ_0 and θ_0. Optimizers OptD, OptG. Privacy parameter δ. Hyperparameters: n_D (D steps per G step), T (total number of D steps), B (expected batch size), C (clipping norm), σ (noise multiplier).
2: q ← B/|D| and t, k ← 0 ▷ Calculate sampling rate q, initialize counters.
3: while t < T do ▷ Update D with DPSGD.
4:   S_t ∼ PoissonSample(D, q) ▷ Sample a real batch S_t by including each (x, y) ∈ D w.p. q.
5:   S̃_t ∼ G(·; θ_k)^B ▷ Sample fake batch S̃_t.
6:   g_{ϕ_t} ← Σ_{(x,y)∈S_t} clip(∇_{ϕ_t}(−log D(x, y; ϕ_t)); C) + Σ_{(x̃,ỹ)∈S̃_t} clip(∇_{ϕ_t}(−log(1 − D(x̃, ỹ; ϕ_t))); C) ▷ Clip per-example gradients.
7:   ĝ_{ϕ_t} ← (1/(2B))(g_{ϕ_t} + z_t), where z_t ∼ N(0, C²σ²I) ▷ Add Gaussian noise.
8:   ϕ_{t+1} ← OptD(ϕ_t, ĝ_{ϕ_t}) and t ← t + 1
9:   if n_D divides t then ▷ Perform G update every n_D steps.
10:    S̃′_t ∼ G(·; θ_k)^B
11:    g_{θ_k} ← (1/B) Σ_{(x̃,ỹ)∈S̃′_t} ∇_{θ_k}(−log D(x̃, ỹ; ϕ_t))
12:    θ_{k+1} ← OptG(θ_k, g_{θ_k}) and k ← k + 1
13:  end if
14: end while
15: ε ← PrivacyAccountant(T, σ, q, δ) ▷ Compute privacy budget spent.
16: Output: Final G parameters θ_k. (ε, δ)-DP guarantee.
We highlight one convenient property of DP, known as closure under post-processing. This says that interacting with a privatized model (e.g., using it to compute gradients on non-sensitive data, generate samples) does not lead to any further privacy violation.
Proposition 2 (Post-processing). Let A : U → Θ be a randomized algorithm that is (ε, δ)-DP, and f : Θ → Y be an arbitrary randomized mapping. Then f ◦ A : U → Y is (ε, δ)-DP.
DPSGD. A gradient-based training algorithm can be privatized by employing differentially private stochastic gradient descent (DPSGD) (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016) as a drop-in replacement for SGD. DPSGD involves clipping per-example gradients and adding Gaussian noise to their sum, which effectively bounds and masks the contribution of any individual point to the final model parameters. Privacy analysis of DPSGD follows from several classic tools in the DP toolbox: Gaussian mechanism, privacy amplification by subsampling, and composition (Dwork et al., 2006a; Dwork & Roth, 2014; Abadi et al., 2016; Wang et al., 2019). Our work employs the DPSGD analysis of Mironov et al. (2019) implemented in Opacus (Yousefpour et al., 2021).
DPGANs. Algorithm 1 details the training algorithm for DPGANs, which is effectively an instantiation of DPSGD. Note that only gradients for the discriminator D must be privatized (via clipping and noise), and not those for the generator G. This is a consequence of post-processing (Proposition 2) – the generator only interacts with the sensitive dataset indirectly via discriminator parameters, and therefore does not need further privatization.
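For readers who prefer code to pseudocode, the following is a simplified PyTorch-style sketch of Algorithm 1. It is not the authors' implementation: per-example clipping and noising (lines 6-7 of the algorithm) are assumed to happen inside opt_d via an Opacus-style privatized optimizer, and gen.sample is a hypothetical helper returning a labelled fake batch.

import torch
import torch.nn.functional as F

def train_dpgan(disc, gen, opt_d, opt_g, dp_loader, n_d, total_d_steps, batch_size):
    # disc, opt_d, dp_loader: already privatized (clipping, noise, and Poisson
    # sampling happen inside); gen, opt_g: ordinary, by post-processing.
    t = 0
    while t < total_d_steps:
        for real_x, real_y in dp_loader:
            # Discriminator step (DPSGD), Algorithm 1 lines 4-8.
            fake_x, fake_y = gen.sample(batch_size)
            d_real = disc(real_x, real_y)
            d_fake = disc(fake_x.detach(), fake_y)
            loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
                      + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
            t += 1
            # Generator step every n_d discriminator steps, lines 9-13.
            if t % n_d == 0:
                fake_x, fake_y = gen.sample(batch_size)
                out = disc(fake_x, fake_y)
                loss_g = F.binary_cross_entropy_with_logits(out, torch.ones_like(out))
                opt_g.zero_grad(); loss_g.backward(); opt_g.step()
            if t >= total_d_steps:
                break
    return gen  # releasing G is safe: only D saw the sensitive data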
3 FREQUENT DISCRIMINATOR STEPS IMPROVE PRIVATE GANS
In this section, we discuss our main finding: the number of discriminator steps taken between each generator step (nD from Algorithm 1) plays a significant role in the success of private GAN training. For a fixed setting of DPSGD hyperparameters, there is an optimal range of values for nD that maximizes generation quality, in terms of both visual quality and utility for downstream classifier training. This value is often quite large (nD ≈ 100 in some cases).
3.1 EXPERIMENTAL DETAILS
Setup. We focus on labelled generation of MNIST (LeCun et al., 1998) and FashionMNIST (Xiao et al., 2017), both of which are comprised of 60000 28×28 grayscale images divided into 10 classes. To build a strong baseline, we begin from an open source PyTorch (Paszke et al., 2019) implementation2 of DCGAN (Radford et al., 2016) that performs well non-privately, and copy their training recipe. We then adapt their architecture to our purposes: removing BatchNorm layers (which are not compatible with DPSGD) and adding label embedding layers to enable labelled generation. Training this configuration non-privately yields labelled generation that achieves FID scores of 3.2 on MNIST and 15.9 on FashionMNIST. Finally, we note that these models are not small: D and G have 1.72M and 2.27M trainable parameters respectively. Please see Appendix B.1 for more details.
Privacy implementation. To privatize training, we use Opacus (Yousefpour et al., 2021) which implements per-example gradient computation and the RDP accounting of Mironov et al. (2019). For our baseline setting, we use the following DPSGD hyperparameters: we keep the non-private (expected) batch size B = 128, and use a noise scale σ = 1 and clipping norm C = 1. Under these settings, we have the budget for T = 450000 discriminator steps when targeting (10, 10−5)-DP.
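As a sanity check on the stated budget, the accounting can be reproduced with the Opacus RDP accountant. A minimal sketch (the exact ε returned depends on the Opacus version):

from opacus.accountants import RDPAccountant

q = 128 / 60000                            # sampling rate B/n for MNIST
accountant = RDPAccountant()
for _ in range(450_000):                   # one step per discriminator update
    accountant.step(noise_multiplier=1.0, sample_rate=q)
print(accountant.get_epsilon(delta=1e-5))  # close to the targeted eps = 10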
Evaluation. We evaluate our generative models by examining the visual quality and utility for downstream tasks of generated images. Following prior work, we measure visual quality by computing the Fréchet Inception Distance (FID) (Heusel et al., 2017) between 60000 generated images and the entire test set.3 To measure downstream task utility, we again follow prior work, and train a CNN classifier on 60000 generated image-label pairs and report its accuracy on the real test set.
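The downstream-utility protocol amounts to standard supervised training on synthetic pairs. Below is a minimal sketch with a toy CNN; the architecture is a placeholder, not the exact evaluation classifier used in the paper.

import torch
from torch import nn

def downstream_accuracy(synthetic_loader, real_test_loader, epochs=10, device="cuda"):
    clf = nn.Sequential(  # toy CNN stand-in for the evaluation classifier
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(64 * 7 * 7, 10),
    ).to(device)
    opt = torch.optim.Adam(clf.parameters())
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):                  # train on generated image-label pairs
        for x, y in synthetic_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad(); loss_fn(clf(x), y).backward(); opt.step()
    correct = total = 0
    with torch.no_grad():                    # evaluate on the real test set
        for x, y in real_test_loader:
            pred = clf(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item(); total += y.numel()
    return correct / total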
3.2 RESULTS
More frequent discriminator steps improve generation. We plot in Figures 1a and 2 the evolution of FID and downstream accuracy during DPGAN training for both MNIST and FashionMNIST, under varying discriminator update frequencies nD. This parameter has an outsized impact on the final results. For MNIST, nD = 50 yields the best results; on FashionMNIST, the best FID is obtained at nD = 200 and the best accuracy at nD = 100.
Private GANs are on a path to mode collapse. For the MNIST results in Figures 1a and 2a, we observe that at low discriminator update frequencies (nD = 10), the best FID and accuracy scores occur early in training, well before the privacy budget we are targeting is exhausted.4 In fact, at 50000 discriminator steps (ε ≈ 2.85), nD = 10 has better FID (30.6) and accuracy (83.3%) than other settings of nD. However, these results deteriorate with continued training. In Figure 3, we
2Courtesy of Hyeonwoo Kang (https://github.com/znxlwm). Code available at this link. 3We use an open source PyTorch implementation to compute FID: https://github.com/ mseitzer/pytorch-fid. 4This observation has been reported in (Neunhoeffer et al., 2021), serving as motivation for their remedy of taking a mixture of intermediate models encountered in training. We are not aware of any mentions of this aspect of DPGAN training in papers reporting DPGAN baselines for labelled image synthesis.
plot the evolution of generated images for this nD = 10 run over the course of training, and observe qualitative evidence of mode collapse, co-occurring with the deterioration in FID and accuracy.
An optimal discriminator update frequency. These results suggest that, fixing other DPSGD hyperparameters, there is an optimal setting for the discriminator step frequency nD that strikes a balance between: (1) being too low, causing the generation quality to peak early in training and then undergo mode collapse, so that all subsequent training consumes additional privacy budget without improving the model; and (2) being too high, preventing the generator from taking enough steps to converge before the privacy budget is exhausted (an example of this is the nD = 200 run in Figure 2a). Striking this balance results in the most effective utilization of privacy budget towards improving the generator.
4 WHY DOES TAKING MORE STEPS HELP?
In this section, we present empirical findings towards understanding why more frequent discriminator steps improve DPGAN training. We propose an explanation that is consistent with our findings.
How does DP affect GAN training? Figure 4 compares the accuracy of the GAN discriminator (on held-out real and fake examples) immediately before each generator step between non-private training and private training with different settings of nD. We observe that non-privately, discriminator accuracy stays around 60% throughout training. Naively introducing DP (nD = 1) leads to a qualitative difference: DP causes discriminator accuracy to drop to 50% immediately at the start of training, and never recovers.5
For other settings of nD, we make three observations: (1) larger nD corresponds to higher accuracy; (2) the generator improves during the periods in which the discriminator stays above 50% accuracy; and (3) accuracy decreases throughout training as the generator improves, and degradation/stagnation of the generator (as observed in Figure 3) co-occurs with discriminator accuracy dropping to 50%.
Based on these observations, we propose the following explanation for why more steps help:
• Generator improvement occurs when the discriminator is capable of distinguishing between real and fake data.
• The asymmetric noise addition introduced by DP to the discriminator makes such a task difficult, resulting in limited generator improvement.
• Allowing the discriminator to train longer on a fixed generator improves its accuracy, recovering the non-private case where the generator and discriminator are balanced.
Does reducing noise accomplish the same thing? In light of this explanation, we ask if reducing the noise level σ can offer the same improvement as taking more steps, as reducing σ should also improve discriminator accuracy before a generator step. To test this: starting from our setting in Section 3, fixing nD = 1, and targeting MNIST at ε = 10, we search over a grid of noise levels
5Our plot only shows the first 20000 generator steps, but we remark that this persists until the end of training (450000 steps).
σ = {0.4, 0.43, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8}; the lowest of which, σ = 0.4, admits a budget of only T = 360 discriminator steps. We obtain a best FID of 127.1 and best accuracy of 57.5% at noise level σ = 0.45. Hence we can conclude that in this setting, incorporating discriminator update frequency in our design space allows for more effective use of privacy budget for improving the discriminator, and in turn, generation quality.
Does taking more discriminator steps always help? As we discuss in more detail in Section 5.1, when we are able to find other means to improve the discriminator beyond taking more steps, tuning discriminator update frequency may not yield improvements. To illustrate with an extreme case, consider eliminating the privacy constraint. In non-private GAN training, taking more steps is known to be unnecessary. We corroborate this result: we run our non-private baseline from Section 3 with the same number of generator steps, but opt to take 10 discriminator steps between each generator step instead of 1. FID worsens from 3.2 → 8.3, and accuracy worsens from 96.8% → 91.3%.
5 BETTER GENERATORS VIA BETTER DISCRIMINATORS
Our proposed explanation in Section 4 provides a concrete suggestion for improving GAN training: effectively use our privacy budget to maximize the number of generator steps taken when the discriminator has sufficiently high accuracy. We experiment with modifications to the private GAN training recipe towards these ends, which translate to improved generation.
5.1 LARGER BATCH SIZES
Several recent works have demonstrated that for classification tasks, DPSGD achieves higher accuracy with larger batch sizes, after tuning the noise scale σ accordingly (Tramèr & Boneh, 2021; Anil et al., 2021; De et al., 2022). GAN training is typically conducted with small batch sizes (for example, DCGAN uses B = 128, which we adopt; StyleGAN uses B = 32). Therefore it is interesting to see if large batch sizes indeed improve private GAN training. We corroborate that larger batch sizes do not significantly improve our non-private MNIST baseline from Section 3: when we go up to B = 2048 from B = 128, FID stays at 3.2 and accuracy improves from 96.8%→ 97.5%.
Results. We scale up batch sizes, considering B ∈ {64, 128, 512, 2048}, and search for the optimal noise scale σ and nD (details in Appendix B.2). We target both ε = 1 and ε = 10. We report the best results from our hyperparameter search in Table 1. We find that larger batch sizes lead to improvements: for ε = 10, the best MNIST and FashionMNIST results are achieved at B = 2048. For ε = 1, the best results are achieved at B = 512. We also note that for large batch sizes, the optimal number of generator steps can be quite small. For B = 2048, σ = 4.0, targeting MNIST at ε = 10, nD = 5 is the optimal discriminator update frequency, and improves over our best B = 128 setting employing nD = 50.
5.2 ADAPTIVE DISCRIMINATOR STEP FREQUENCY
Our observations from Sections 3 and 4 motivate us to consider adaptive discriminator step frequencies. As pictured in Figure 4, discriminator accuracy drops during training as the generator improves. In this scenario, we want to take more steps to improve the discriminator, in order to further improve the generator. However, using a large discriminator update frequency right from the beginning of training is wasteful – as evidenced by the fact that low nD achieves the best FID and accuracy early in training. Hence we propose to start at a low discriminator update frequency (nD = 1), and ramp up when our discriminator is performing poorly.
Measuring discriminator accuracy involves real data, so it must be released with DP. While this is feasible, it introduces the additional problem of having to find the right split of privacy budget for the best performance. We observe that overall discriminator accuracy tracks discriminator accuracy on fake samples alone (which is free to evaluate, by post-processing). Hence we use the latter as a proxy to assess discriminator performance.
The adaptive step frequency is parameterized by two terms, β and d. β is the decay parameter used to compute the exponential moving average (EMA) of discriminator accuracy on fake batches before each generator update. We use β = 0.99 in all settings. d is the accuracy floor; upon reaching it, we move to the next update frequency nD ∈ {1, 2, 5, 10, 20, 50, 100, 200, 500}. We try d = 0.6 and d = 0.7, finding that 0.7 works better for large batches. Additionally, we allow a grace period of 2/(1 − β) = 200 generator steps before moving on to the next update frequency. This formula is motivated by the fact that a β-EMA's value is primarily determined by its last 2/(1 − β) observations. An additional benefit of the adaptive step frequency is that we do not have to search for the optimal update frequency. Although the adaptive step frequency introduces the extra hyperparameter of the threshold d, we found that these two settings (d = 0.6 and d = 0.7) were sufficient to improve over the results of a much more extensive hyperparameter search.
5.3 COMPARISON WITH PREVIOUS RESULTS IN THE LITERATURE
5.3.1 MNIST AND FASHIONMNIST
Table 1 summarizes our best experimental settings for MNIST and FashionMNIST, and situates them in the context of previously reported results for the task. We provide some example generated images in Figures 7 and 8 for ε = 10, and Figures 9 and 10 for ε = 1.
Simple DPSGD beats all alternative GAN privatization schemes. Our baseline DPGAN from Section 3, with the appropriate choice of nD (and without the modifications described in this section yet), outperforms all other GAN-based approaches proposed in the literature (GS-WGAN, PATEGAN, G-PATE, and DataLens) uniformly across both metrics, both datasets, and both privacy levels.
Large batch sizes and adaptive step schedules improve GAN training. Broadly speaking, across both privacy levels and both datasets, we see an improvement from taking larger batch sizes, and then another with an adaptive step schedule. The magnitude of improvement varies.
Comparison with state-of-the-art. In the low privacy/high ε regime, most of our results are dramatically better than prior work6 – for example, decreasing FID from 48.4 to 13.0 and increasing accuracy from 83.2% to 95.0% on MNIST. In the high privacy/low ε regime, improvements are not quite as extreme, but can still be significant (FID for MNIST and FashionMNIST); we only compare negatively to state-of-the-art for accuracy on FashionMNIST. A visual comparison of the ε = 10 results is given in Figure 5.
5.3.2 CELEBA-GENDER
We also report results on generating 32 × 32 CelebA, conditioned on gender at (10, 10−6)-DP. For these experiments, we used slightly larger models (2.64M and 3.16M parameters for D and G
6We do not compare with two recent works on private generative models (Chen et al., 2022; Jiang et al., 2022), as we believe there are gaps in their privacy analyses. This has been confirmed by the authors of Jiang et al. (2022), and the sketch of an argument regarding non-privacy of Chen et al. (2022) has been shared with us by others (Anonymous, 2022).
respectively), and employed large batches (B = 1024) and adaptive discriminator step frequency with threshold d = 0.6. Results are summarized in Table 2, example images are in Figure 11.
6 DISCUSSION AND RELATED WORK
DP generative models. The baseline DPGAN that employs a DPSGD-trained discriminator was introduced by Xie et al. (2018), and was subsequently studied in several works (Torkzadehmahani et al., 2019; Beaulieu-Jones et al., 2019). Despite significant interest in the approach and numerous applications to various problems (≈ 300 citations as of November 2022), we were unable to find studies that explore the modifications we perform or uncover similar principles for improving training. Perhaps as a consequence, subsequent work has departed from this approach, examining alternative privatization schemes for GANs (Jordon et al., 2019; Long et al., 2021; Chen et al., 2020;
7We group per-class unconditional GANs together with conditional GANs under the DPGAN umbrella. 8These results are presented graphically in the paper. Exact numbers can be found in their code.
Wang et al., 2021). Contrary to their claims, our work shows that these privatization schemes do not outperform DPSGD. Other generative modelling frameworks have been applied to DP synthetic data including VAEs (Chen et al., 2018), maximum mean discrepancy (Harder et al., 2021; Vinaroz et al., 2022), Sinkhorn divergences (Cao et al., 2021), and normalizing flows (Waites & Cummings, 2021). We show that a well-tuned DPGAN competes with or outperforms these approaches.

Custom approaches versus a well-tuned DPSGD. An ongoing debate pertains to the best techniques and architectures for private ML. Roughly speaking, there are two schools of thought. One investigates novel architectures for privacy, which may be outperformed by more traditional approaches in the non-private setting. Some examples include Chen et al. (2018); Cao et al. (2021); Vinaroz et al. (2022), a variety of generative models specifically designed to be compatible with differential privacy. The other focuses on searching within the space of tried-and-tested methods that are understood to work well non-privately. Some examples include the works of De et al. (2022); Li et al. (2022), who demonstrate that, similar to the non-private setting, large-scale CNN and Transformer architectures can achieve state-of-the-art results for image classification and NLP tasks. The primary modifications to the pipeline are along the lines of changing the batch size, modifying the type of normalization layers, etc., most of which would be explored in a proper hyperparameter search in the non-private setting. Our work fits into the latter line: we show that novel generative models introduced for privacy can be outperformed by GANs trained with well-tuned DPSGD.
Tabular data. Our investigation focused on image datasets, while many important applications of private data generation involve tabular data. While Tao et al. (2021) find that private GAN-based approaches fail to preserve even basic statistics in these settings, we believe that our techniques may yield similar improvements.
7 CONCLUSION
Our most important contribution is to show that private GANs have been underrated by the research community, and can achieve state-of-the-art results with careful tuning. We hope and anticipate this will inspire the community to revisit private GANs, and quickly improve upon our results.
A GENERATED SAMPLES
We provide a few non-cherrypicked samples for MNIST and FashionMNIST at ε = 10 and ε = 1, as well as 32 × 32 CelebA-Gender at ε = 10.
B IMPLEMENTATION DETAILS
B.1 MNIST AND FASHIONMNIST TRAINING RECIPE
For MNIST and FashionMNIST, we begin from an open source PyTorch implementation of DCGAN (Radford et al., 2016) (available at this link) that performs well non-privately, and copy their
training recipe. This includes: batch size B = 128, the Adam optimizer (Kingma & Ba, 2015) with parameters (α = 0.0002, β1 = 0.5, β2 = 0.999) for both G and D, the non-saturating GAN loss (Goodfellow et al., 2014), and a 5-layer fully convolutional architecture with width parameter d = 128.
To adapt it to our purposes, we make three architectural modifications: in both G and D we (1) remove all BatchNorm layers (which are not compatible with DPSGD); (2) add label embedding layers to enable labelled generation; and (3) adjust convolutional/transpose convolutional stride lengths and kernel sizes as well as remove the last layer, in order to process 1 × 28 × 28 images without having to resize. Finally, we remove their custom weight initialization, opting for PyTorch defaults.
Our baseline non-private GANs are trained for 45000 steps. We train our non-private GANs with Poisson sampling as well: for each step of discriminator training, we sample real examples by including each element of our dataset independently with probability B/n, where n is the size of our dataset. We then add B fake examples sampled from G to form our fake/real combined batch.
B.2 LARGE BATCH SIZE HYPERPARAMETER SEARCH
We scale up batch sizes, considering B ∈ {64, 128, 512, 2048}, and search for the optimal noise scale σ and nD. For B = 128 targeting ε = 10, we search over three noise scales, Σ^{ε=10}_{B=128} = {0.6, 1.0, 1.4}. We choose candidate noise scales for other batch sizes as follows: when considering a batch size 128k, we search over Σ^{ε=10}_{B=128k} := {√k · σ : σ ∈ Σ^{ε=10}_{B=128}}. We also target the high privacy (ε = 1) regime. For ε = 1, we multiply all noise scales by 5, Σ^{ε=1}_{B} = {5σ : σ ∈ Σ^{ε=10}_{B}}. We search over a grid nD ∈ {1, 2, 5, 10, 20, 50, 100, 200, 500}. Due to compute limitations, we omit some values that we are confident will fail (e.g., trying nD = 1 when mode collapse occurs for nD = 5).
C ADDITIONAL DISCUSSION
GANhacks. Guidance in the non-private setting (tip 14 of Chintala et al. (2016)) prescribes training the discriminator for more steps in the presence of noise (a regularization approach used in non-private GANs). This is the case for DP, and is our core strategy that yields the most significant gains in utility. We were not aware of this tip when we discovered this phenomenon, but it serves as validation of our finding. While Chintala et al. (2016) provides little elaboration, looking at further explorations of this principle in the non-private setting may offer guidance for improving DPGANs. | 1. What is the focus of the paper regarding DPGAN?
2. What are the strengths of the proposed approach, particularly in terms of improving the quality of generated data?
3. What are the weaknesses of the paper, especially regarding its novelty and performance in certain regimes?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors revisit the DPGAN paper and state that the noise added to discriminator training disrupts the balance between generator and discriminator training. They show that tuning the number of update steps taken by the discriminator for every generator update step, specifically taking more steps, significantly improves the results. They show that for MNIST at epsilon = 10, their private GAN's FID goes from 48.4 to 13.0 and the downstream classifier accuracy goes up from 83.2% to 95.0%.
Strengths And Weaknesses
Strength
The paper provides a simple yet effective method to improve the quality of the data generated by DPGANs. The experiments show significant improvements on the tested benchmarks. The authors discuss at length why more steps are needed in the case of DPGAN. In addition, the authors discuss the role of the batch size in DPGAN settings and the additional benefits of scheduling the discriminator update frequency (start low, then increase).
Weaknesses
The novelty of the work is marginal. The method works best at epsilon = 10, but in the high privacy / low epsilon regime the results are sometimes even worse.
Clarity, Quality, Novelty And Reproducibility
The paper is written in a clear and easy-to-follow fashion. The work might be important to the community, although its novelty is not substantial. The work could easily be reproduced.
ICLR | Title
RETHINKING SELF-DRIVING: MULTI-TASK KNOWLEDGE FOR BETTER GENERALIZATION AND ACCIDENT EXPLANATION ABILITY
Abstract
Current end-to-end deep learning driving models have two problems: (1) poor generalization ability to unobserved driving environments when the diversity of the training driving dataset is limited; (2) lack of accident explanation ability when driving models do not work as expected. To tackle these two problems, rooted in the belief that knowledge of associated easy tasks is beneficial for addressing a difficult task, we propose a new driving model that is composed of a perception module for seeing and thinking and a driving module for behaving, and train it stepwise with multi-task perception-related basic knowledge and driving knowledge. Specifically, segmentation maps and depth maps (pixel-level understanding of images) are considered as what & where and how far knowledge for tackling easier driving-related perception problems before generating final control commands for the difficult driving task. The results of our experiments demonstrate the effectiveness of multi-task perception knowledge for better generalization and accident explanation ability. With our method, the average success rate of finishing the most difficult navigation tasks in the untrained city of the CoRL test surpasses the current benchmark method by 15 percent in trained weather and 20 percent in untrained weathers.
1 INTRODUCTION
Observing progressive improvement in various fields of pattern recognition with end-to-end deep-learning-based methods (Krizhevsky et al., 2012; Girshick, 2015), self-driving researchers have tried to revolutionize the autonomous car field with the help of end-to-end deep learning techniques (Bojarski et al., 2016b; Chen et al., 2015; Codevilla et al., 2017). Impressive results have been achieved by mapping camera images directly to driving control commands (Bojarski et al., 2016b) with a simple structure similar to ones for the image classification task (Simonyan & Zisserman, 2014). Further research has been conducted to improve the performance of deep-learning-based autonomous driving systems; for example, the Conditional Imitation Learning (Codevilla et al., 2017) approach has been proposed to solve the ambiguous action problem.
However, two crucial problems have been overlooked: (1) poor generalization ability to unobserved driving environments given limited diversity of training scenarios. For example, though Dosovitskiy et al. (2017) addressed the driving direction selection problem, the model showed poor generalization ability in an unseen test town with a different map and building structures than the training town's. This generalization problem is extremely important since collected driving datasets always have limited diversity. (2) Current end-to-end autonomous approaches lack accident explanation ability when these models behave unexpectedly. Although saliency-map-based visualization methods (Smilkov et al., 2017; Sundararajan et al., 2017; Springenberg et al., 2014; Bojarski et al., 2016a) have been proposed to dig into the 'black box', the only information these methods can bring is the possible attention of the model rather than the perception process of the model.
We propose a new driving approach to solve the two aforementioned problems by using multi-task basic perception knowledge. We argue that when an end-to-end model is trained to address a specific difficult task, it is better to first train the model with some basic knowledge to solve relevant easier tasks (Pan et al., 2010). An analogy for this can be observed when human beings learn difficult knowledge. For example, to solve a complex integration problem, students who know basic math are able to grasp the core of integration more quickly than students without such knowledge, and can solve other similar integration problems instead of memorizing the solution to one specific problem.
Our proposed model consists of two modules: a perception module and a driving module, as shown in Fig. 1. The perception module is used for learning easier driving-related perception knowledge, which we define as the ability of pixel-level understanding of the input, including what & where and how far knowledge. We train the perception module with segmentation maps and depth maps first, where the former serves as what & where knowledge and the latter serves as how far knowledge. By visualizing the inferred segmentation and depth results, one can tell whether the perception process works well or not. After the perception module is trained to have the ability of pixel-level understanding of its image input, we freeze the perception module weights and train the driving module with the driving dataset. This decomposition of the end-to-end driving network structure is considered a mediated perception approach (Ullman, 1980). With our proposed driving structure and stepwise training strategy, the generalization and accident explanation problems are addressed to a certain extent.
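A minimal sketch of this stepwise strategy in PyTorch follows; the module and loader names (perception_module, driving_module, driving_loader) and the encode method are hypothetical stand-ins for our actual components.

import torch
import torch.nn.functional as F

# Phase 1 (not shown): train perception_module on segmentation and depth.
# Phase 2: freeze the perception weights, then train the driving module only.
for p in perception_module.parameters():
    p.requires_grad = False
perception_module.eval()

opt = torch.optim.Adam(driving_module.parameters(), lr=1e-4)

for images, controls, command in driving_loader:  # command: index of guidance
    with torch.no_grad():                         # perception features are fixed
        features = perception_module.encode(images)
    pred = driving_module(features, command)      # branch selected by command
    loss = F.mse_loss(pred, controls)
    opt.zero_grad(); loss.backward(); opt.step()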
2 RELATED WORK
Depending on whether mediated perception knowledge is generated, self-driving models are categorized into mediated perception approaches (Ullman, 1980) and behavior reflex approaches.
For mediated perception approaches, there are several well-performing deep learning methods. For example, the Deep-Driving method (Chen et al., 2015) first converts input RGB images to key perception indicators related to final driving controls, and a very simple driving controller is designed based on the predicted perception indicators. A problem of this approach is that key perception indicators have limited ability to describe unseen scenarios and are difficult to collect in reality. Apart from inferring final driving controls, there are approaches that focus on inferring intermediate descriptions of the driving situation only. For the separate scene understanding task, car detection (Lenz et al., 2011) and lane detection (Aly, 2008) are two main topics in this area.
Instead of inferring one perception task at most, multi-task learning methods aim at tackling several relevant tasks simultaneously. Teichmann et al. (2016) use the input image to solve both object detection and road segmentation tasks. Branched E-Net (Neven et al., 2017) not only infers the segmentation map, but also the depth map of the current driving scenario. These multi-task learning methods show better results when sharing the encoder between different perception tasks, but they have not tried to make a car drive either in a simulator or in reality.
As for the behavior reflex approach, also called 'end-to-end learning', NVIDIA first proposed a model mapping input image pixels directly to the final driving control output (steering only) (Bojarski et al., 2016b). Other approaches further attempted to create more robust models; for example, long short-term memory (LSTM) was utilized to let driving models store a memory of the past (Chi & Mu, 2017).
One problem is that the aforementioned methods were tested in dissimilar driving scenarios using different driving datasets, so it is hard to determine whether the model itself, rather than the data, is the source of better driving behavior (Sun et al., 2017).
Codevilla et al. (2017) was tested in a public urban driving simulator (Dosovitskiy et al., 2017) and succeeded in tackling the ambiguous action problem, i.e., that the optimal driving action cannot be inferred from perceptual input alone. Thanks to the CoRL test in Dosovitskiy et al. (2017), fair comparisons can be conducted on the same driving dataset. Codevilla et al. (2017) showed limited generalization ability in a test town different from the training town (Dosovitskiy et al., 2017), since in the CoRL test the training dataset may only be collected from a single training town.
When an end-to-end driving method behaves badly and causes accidents, accident explanation ability is required. Though saliency-map-based visualization methods (Bojarski et al., 2016a; Smilkov et al., 2017) help us understand the influence of the input on the final driving control, it is extremely hard to tell which module of the model fails when driving problems happen: whether the model perceives incorrectly, or the driving inference proceeds wrongly from good perception information. We enable the driving system to give a qualitative explanation by visualizing the inferred multi-task basic knowledge.
3 FRAMEWORK OF PROPOSED SYSTEM
The basic structure of the proposed model is shown in Fig. 1. The model has two parts: (1) a multi-task basic knowledge perception module and (2) a driving decision branch module. The perception module perceives the world by inferring a depth map and a segmentation map. It is composed of one shared encoder and two decoders for the two kinds of basic perception knowledge: (1) a segmentation decoder that generates 'what & where' information by predicting segmentation maps, and (2) a depth decoder that predicts 'how far' the objects in view are by inferring depth maps. The perception module extracts an encoded feature map containing pixel-level understanding for the driving module, and supports qualitative explanation when the model does not work as expected: by visualizing the predicted segmentation and depth maps, one can determine whether a driving problem is caused by the perception process or the driving process.
The driving module enables the model to generate driving decisions that follow different directional guidance. We categorize real-world driving guidance into four types, as in Codevilla et al. (2017): (1) following the lane, (2) turning left, (3) going straight, and (4) turning right. Each guidance direction has a corresponding driving branch that predicts the driving control values, so there are four driving guidance branches in total. The output of the second-to-last layer of the perception module is fed to the driving module, whose training can therefore benefit from the multi-task knowledge extracted by the perception module. Convolution layers rather than linear layers are used to infer the final driving controls for each direction; this preserves the spatial relations in the information and reduces the number of parameters, which matters given the non-negligible number of direction branches.
3.1 MULTI-TASK BASIC KNOWLEDGE PERCEPTION MODULE
The perception module is built with the residual blocks proposed in He et al. (2016), which address gradient vanishing and the 'degradation problem', and it has a structure similar to SegNet (Badrinarayanan et al., 2015), proposed for efficient image segmentation. The key difference is that our method uses two different decoders to infer both segmentation and depth maps simultaneously, instead of the segmentation map only. Besides, we constrain the total stride of the encoder to 8 to preserve the resolution of the feature map, as a large total stride hurts feature-map reconstruction. Hybrid Dilated Convolution (Wang et al., 2017) is adopted as the last part of the encoder, since it enlarges the receptive field and avoids the gridding problem. Groupout (Park) is also adopted to reduce overfitting in the convolutional network.
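A minimal PyTorch sketch of this shared-encoder, dual-decoder layout is given below. It is an illustration rather than the paper's exact network: the channel counts, the single dilated layer standing in for the Hybrid Dilated Convolution block, and the class count are assumptions, and residual blocks and Groupout are omitted for brevity.

```python
import torch
import torch.nn as nn

class PerceptionModule(nn.Module):
    """Shared encoder with two decoders: segmentation (what & where), depth (how far)."""
    def __init__(self, num_classes=13):
        super().__init__()
        # Total encoder stride is 8 (three stride-2 convs); the dilated layer
        # widens the receptive field without further downsampling.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=2, dilation=2), nn.ReLU(),
        )
        def make_decoder(out_ch):
            # Three 2x upsampling steps mirror the encoder's total stride of 8.
            return nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1),
            )
        self.seg_decoder = make_decoder(num_classes)  # per-pixel class logits
        self.depth_decoder = make_decoder(1)          # per-pixel depth in [0, 1]

    def forward(self, x):
        feat = self.encoder(x)
        return self.seg_decoder(feat), torch.sigmoid(self.depth_decoder(feat)), feat
```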
3.2 DRIVING DECISION BRANCH MODULE
The driving module is built with residual blocks and follows the general form of Codevilla et al. (2017) in its last output layer, with one output per direction. It is built entirely from convolutional layers in order to keep spatial information and reduce parameters, motivated by Springenberg et al. (2014). Four high-level driving guidance commands, such as "turning right", select which direction branch's output is taken as the final driving output. The driving outputs are steering and acceleration/brake, both ranging from -1 to 1. Since there are four output branches corresponding to the four high-level guidance commands, the last layer has 8 convolution kernels of the full feature-map size, each producing a scalar, and each pair of scalars is taken as the driving controls for one guidance command. To probe the limits of the RGB image alone, no other information such as current speed or steering angle was used as input. Instead, we attempted to predict the current speed from the current RGB image to keep the driving smooth, as done in Codevilla et al. (2017). The input to the driving module is not the output of the last layer of the perception module's encoder but that of its second-to-last layer, an empirical choice that gave the best generalization.
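The branch-selection mechanism can be sketched as follows. This is a hedged illustration: the trunk depth, channel counts, and the feature-map size `feat_hw` are placeholder assumptions; only the 8-scalar head (two controls per branch), the tanh output range, and the selection by guidance command follow the description above.

```python
import torch
import torch.nn as nn

class DrivingModule(nn.Module):
    """Four convolutional branches; one (steer, accel/brake) pair per guidance command."""
    NUM_COMMANDS = 4  # follow lane, turn left, go straight, turn right

    def __init__(self, in_ch=128, feat_hw=(11, 25)):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # 8 kernels of full feature-map size -> 8 scalars per sample,
        # i.e. (steer, accel/brake) for each of the 4 guidance branches.
        self.head = nn.Conv2d(64, 2 * self.NUM_COMMANDS, kernel_size=feat_hw)

    def forward(self, feat, command):
        # command: LongTensor of guidance indices in [0, 3], one per sample.
        out = torch.tanh(self.head(self.trunk(feat))).flatten(1)  # (B, 8), each in [-1, 1]
        out = out.view(-1, self.NUM_COMMANDS, 2)                  # (B, 4, 2)
        return out[torch.arange(out.size(0)), command]            # (B, 2) selected branch
```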
4 EXPERIMENTS
4.1 SYSTEM SETUP
The training dataset is collected in the CARLA simulator (Dosovitskiy et al., 2017), a self-driving simulator developed by Intel for collecting driving-related data and evaluating driving models against a standard testing benchmark named the CoRL test. The CoRL test is composed of 4 tasks of increasing difficulty: (1) Straight: the goal is straight ahead of the starting position. (2) One turn: getting to the goal takes one turn, left or right. (3) Navigation: navigation with an arbitrary number of turns. (4) Navigation with dynamic obstacles: same as the previous task, but with other vehicles and pedestrians (Dosovitskiy et al., 2017).
The main metric for quantitative evaluation is the average success rate on the separate tasks of the CoRL test. The CoRL test includes tests in both the trained town and an untrained town, under both trained and untrained weathers. The trained and untrained test towns have different maps and different building textures.
4.2 DATASET
The data for training our model fall into two parts: (1) a perception module training set and (2) a driving module training set. The perception module was trained on 35,000 paired RGB images, segmentation maps, and depth maps, and evaluated on 5,000 pairs. The driving module was trained on 455,000 samples and evaluated on 62,000 samples. Before training, two vital data processing methods were applied: dataset balancing and data augmentation.
For fair comparison, we use the same driving dataset published with Conditional Imitation Learning (Codevilla et al., 2017), except that we collected extra segmentation and depth maps in the training town for training our perception module.
4.2.1 DATA BALANCING
Dataset balancing contributed to better generalization of both the perception module and the driving module in our experiments, as it makes each mini-batch a microcosm of the whole dataset. For the perception module, the dataset was balanced so that each mini-batch contains all training weathers and an equal amount of going-straight and turning situations. For the driving module, we balanced each training mini-batch to ensure an equal distribution of the driving direction guidances, and rearranged the data so that large-steer samples (absolute value larger than 0.4) account for 1/3 of each mini-batch, braking samples for 1/3, and noisy-steer samples for 1/10.
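A small sketch of how such balanced mini-batches might be assembled is shown below; the pool names, batch size, and the handling of the remaining slots are assumptions, while the 1/3, 1/3, 1/10 quotas follow the text.

```python
import numpy as np

def balanced_batch(pools, rng, batch_size=120):
    """Assemble one driving mini-batch index set from category pools.

    `pools` maps a category name to an array of sample indices; quotas follow
    the stated proportions, with remaining slots drawn from everything else.
    """
    quota = {
        "large_steer": batch_size // 3,   # |steer| > 0.4
        "brake": batch_size // 3,
        "noise_steer": batch_size // 10,
    }
    quota["other"] = batch_size - sum(quota.values())
    idx = np.concatenate([rng.choice(pools[k], size=n, replace=True)
                          for k, n in quota.items()])
    rng.shuffle(idx)
    return idx

# Usage: rng = np.random.default_rng(0); indices = balanced_batch(pools, rng)
```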
4.2.2 DATA AUGMENTATION
We add Gaussian noise, coarse dropout, contrast normalization, and Gaussian blur to the training data of both the perception and driving modules to enlarge the training distribution.
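These operations can be expressed with an augmentation library such as imgaug; the library choice and all parameter ranges below are illustrative assumptions, not the paper's settings.

```python
import imgaug.augmenters as iaa

augmenter = iaa.Sequential([
    iaa.AdditiveGaussianNoise(scale=(0, 0.05 * 255)),  # Gaussian noise
    iaa.CoarseDropout(0.02, size_percent=0.3),         # coarse dropout
    iaa.LinearContrast((0.75, 1.25)),                  # contrast normalization
    iaa.GaussianBlur(sigma=(0.0, 1.5)),                # Gaussian blur
], random_order=True)

images_aug = augmenter(images=images)  # images: uint8 array of shape (N, H, W, 3)
```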
4.3 TRAINING DETAILS
We trained the whole system stepwise: first we trained the perception module with the multi-task basic perception knowledge, then we froze the perception module's weights and trained the driving module with the driving dataset. For the perception module we used a mini-batch size of 24 and set the ratio of segmentation loss to depth loss to 1.5:1, with softmax categorical cross-entropy as the segmentation loss and binary cross-entropy as the depth loss. The optimizer is Adam with a learning rate of 0.001, multiplied by a factor of 0.2 whenever the validation loss fails to drop for one epoch. L2 weight decay and early stopping are also used to avoid overfitting. For the driving module we use an MSE loss and Adam with a starting learning rate of 0.002 that decays exponentially by 0.9 every epoch; early stopping and L2 weight decay are again used for regularization.
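A sketch of this two-stage recipe in PyTorch is shown below, reusing the hypothetical `perception` and `driving` modules from the earlier sketches; the weight-decay value and the exact plateau patience are assumptions, while the loss choices, 1.5:1 weighting, learning rates, and decay factors follow the text.

```python
import torch.nn as nn
import torch.optim as optim

# Stage 1: train the perception module on segmentation + depth.
seg_criterion = nn.CrossEntropyLoss()  # softmax categorical cross-entropy
depth_criterion = nn.BCELoss()         # binary cross-entropy; depth is in [0, 1]
opt = optim.Adam(perception.parameters(), lr=1e-3, weight_decay=1e-5)  # decay value assumed
scheduler = optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.2, patience=1)

def perception_step(rgb, seg_gt, depth_gt):
    seg_pred, depth_pred, _ = perception(rgb)
    loss = 1.5 * seg_criterion(seg_pred, seg_gt) + 1.0 * depth_criterion(depth_pred, depth_gt)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
# After each validation pass: scheduler.step(val_loss)

# Stage 2: freeze the perception weights, then train the driving module on MSE.
for p in perception.parameters():
    p.requires_grad = False
drive_opt = optim.Adam(driving.parameters(), lr=2e-3)
drive_scheduler = optim.lr_scheduler.ExponentialLR(drive_opt, gamma=0.9)  # step once per epoch
drive_criterion = nn.MSELoss()
```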
5 EXPERIMENTS & RESULTS
5.1 GENERALIZATION ABILITY TEST
We compare the driving performance of our proposal against other methods in the CoRL test via the success rate of finishing each task. Detailed results are shown in Table 1.
From Table 1, though our proposal finishes slightly fewer tasks under training conditions than other methods, it achieves a much higher success rate in the untrained town environments, which demonstrates much better generalization to an untrained town than the other methods in the CoRL test when trained with limited diversity of training conditions. One important note is that we use almost the same driving dataset for training as the method of Codevilla et al. (2017) shown in Table 1.
We can also visualize the perception process while the model works. One example of a test in the untrained town under untrained weather is shown in Fig. 2.
5.2 ORIGIN OF BETTER GENERALIZATION ABILITY
Since our model generalizes better to the unseen town than other methods trained on almost the same driving dataset (except that we collected extra depth and segmentation maps in the same training environments), we want to investigate the origin of this better generalization. There are two possible explanations: (1) the basic knowledge (segmentation and depth maps) and (2) the network structure. We therefore conducted experiments comparing the performance of two methods:
• Our original proposal: first train the perception module with basic knowledge; once it is trained, freeze its weights and train the driving module with the driving dataset.
• Compared method: train the encoder of the perception module and the driving module together with the driving dataset, with no basic perception knowledge used for training.
Since CoRL tests are time-consuming, we limited this evaluation to the most difficult setting: the untrained town under untrained weathers. Results are shown in Table 2. The results make it clear that the multi-task basic knowledge used in the training phase, rather than the network structure, is the origin of our proposal's good generalization to the untrained town. Moreover, the network structure could be improved for better performance in the future.
5.3 QUALITATIVE CAUSE EXPLANATION ABILITY OF DRIVING PROBLEMS
Besides leading to better generalization, the basic knowledge can also be used to give a qualitative explanation of driving problems. The segmentation and depth maps are output by the perception module during the test phase, so how the driving module perceives the current scene can be inspected simply by visualizing these outputs. From the predicted pixel-level understanding of the situation, the cause of a driving problem can be inferred.
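A minimal sketch of such an inspection utility, assuming the hypothetical `perception` module sketched earlier (returning segmentation logits, a depth map, and features):

```python
import torch
import matplotlib.pyplot as plt

def explain_frame(perception, rgb):
    """Show the model's pixel-level reading of one frame next to the input."""
    perception.eval()
    with torch.no_grad():
        seg, depth, _ = perception(rgb.unsqueeze(0))
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    axes[0].imshow(rgb.permute(1, 2, 0).cpu().numpy()); axes[0].set_title("input RGB")
    axes[1].imshow(seg.argmax(1)[0].cpu().numpy()); axes[1].set_title("predicted segmentation")
    axes[2].imshow(depth[0, 0].cpu().numpy(), cmap="magma"); axes[2].set_title("predicted depth")
    for ax in axes:
        ax.axis("off")
    plt.show()
```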
One example is shown in Fig. 3. For a failed straight task in the untrained town under the untrained weather 'soft rain sunset', where the driving model failed to move forward, we visualized the segmentation and depth maps predicted by the perception module. The failure is clearly caused by the perception module: the model falsely perceived a car in front of it and, to avoid a collision, did not start. There is actually no car, so the perception module made a false judgement. Interestingly, the scene in Fig. 3 can fool human readers into seeing a car as well, because of sunlight reflecting off the wet road, and the perception module shares this reading; in that sense the perception module's judgement is understandable rather than simply wrong. For traditional end-to-end driving methods (Bojarski et al., 2016b) such reasoning is impossible, as they do not address the cause explanation ability that is of great importance for the practical use of deep learning driving models.
5.4 FINE-TUNE TEST
Fine-tuning (Yosinski et al., 2014), which refers to initializing training on a target dataset with the weights of a model already trained on a related dataset, instead of using weight initialization methods (Glorot & Bengio, 2010; He et al., 2015; LeCun et al., 2012), is a common trick in deep learning, since empirically it can lead to better generalization on the new target dataset. In our specific case, by the fine-tune method we mean that after training the perception module we continue to train the weights of its encoder as well, instead of freezing them. In Table 3 we compare the performance of the fine-tune method and our original proposed method.
This comparison gave a counter-intuitive result: after fine-tuning the weights of the perception module, the driving model achieved worse results than the original method, which freezes the perception module's weights while training the driving module.
One possible reason is that the generalization ability lies in the perception module rather than the driving module; when the perception module is trained again with the driving dataset, its ability to generate compressed multi-task knowledge is destroyed. As the fine-tuned model can no longer benefit from the multi-task knowledge, it fails to reach the generalization ability of the original proposal.
We further visualize the loss surface along one direction by projecting it onto a line (Goodfellow & Vinyals, 2014) to seek a qualitative explanation for this result. The x axis corresponds to linear interpolation between the trained weights of our original method and those of the fine-tuned method. The weights along this direction are computed as in Equation 1.
f(α · x_finetune + (1 − α) · x_rgb0), α ∈ [−1, 2] (1)
Here α is the linear interpolation ratio, x_finetune and x_rgb0 are the trained weights of the fine-tune method and the original method, and f is the loss function of the whole model evaluated at the given weights. We plot the projected loss curve in Fig. 4 by sampling interpolated weights. Fig. 4 suggests one qualitative reason for the worse behavior of the fine-tune method from a loss-surface perspective: the weights obtained by fine-tuning are stuck on a very flat plateau, while the weights of the original method settle into a local minimum.
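A sketch of this interpolation, directly implementing Equation 1 over two saved state dicts; the sampling density and the user-supplied `eval_loss` routine are assumptions (and all parameters are assumed to be float tensors):

```python
import copy
import numpy as np
import torch

def line_losses(model, w_finetune, w_rgb0, eval_loss, num=31):
    """Loss along the 1-D slice of Eq. 1: f(alpha*x_finetune + (1-alpha)*x_rgb0)."""
    alphas = np.linspace(-1.0, 2.0, num)
    probe = copy.deepcopy(model)
    losses = []
    for a in alphas:
        state = {k: a * w_finetune[k] + (1.0 - a) * w_rgb0[k] for k in w_rgb0}
        probe.load_state_dict(state)
        with torch.no_grad():
            losses.append(eval_loss(probe))  # e.g. mean loss over a held-out batch
    return alphas, losses
```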
6 CONCLUSION
In this paper we propose a new driving system with better generalization and accident explanation ability, achieved by having it solve simpler driving-related perception tasks before generating commands for the difficult driving task. Through multiple experiments we empirically showed the effectiveness of multi-task basic perception knowledge for better generalization to an unobserved town when the diversity of the training dataset is limited. Our model also has self-explanation ability: by visualizing the segmentation and depth maps predicted by the perception module, the cause of a driving problem can be determined when one happens. One interesting result from comparing training strategies is that the generalization ability originates from the basic knowledge and lies in the weights of the perception module, which should not be modified while training with the driving dataset. We hope our work motivates further research that uses multi-task, target-related perception knowledge for better performance in robot learning. In future work we will investigate more effective network structures.
ACKNOWLEDGMENTS
Thanks to all Prof. Ogata lab members, especially Kamuza SASAKI san, who taught me about Deep Learning patiently when I had zero knowledge of what it is. Great thanks to my bros Zehai TU and Pengfei LI, who support me no matter how annoying I am in the midnight. Final thanks to my homie Mengcheng SONG for being a Hiphop guide for me and making me understand the importance of always 'keeping it real'. | 1. What is the main contribution of the paper in terms of multi-task learning architecture for depth and segmentation map estimation?
2. What are the strengths of the proposed approach, particularly in its simplicity and efficiency?
3. Do you have any concerns regarding the novelty of the multi-task learning approach, especially in comparison to prior works such as Xu et al.'s paper?
4. How do you assess the quality and convincingness of the experimental results, given the limited size of the evaluation data?
5. Are there any typos or grammatical errors that need to be addressed in the paper? | Review | Review
This paper presents an end-to-end multi-task learning architecture for depth and segmentation map estimation together with driving prediction. The whole architecture is composed of two components: a perception module (segmentation and depth map inference) and a driving decision module. The training process is sequential: first train the perception module, then train the driving decision task with the weights of the perception module frozen. The authors evaluated the proposed approach on a simulated dataset; experimental results demonstrated the advantage of multi-task learning over the single-task setup.
Advantages:
The pipeline is easy to understand; it is simple and efficient based on the provided results.
The proposed framework aims to give a better understanding of the application of deep learning to the self-driving problem, such as the analysis and illustration in Figure 3.
Questions:
There are several typos to be addressed, e.g., the question marks in the table reference in Section 5.1, and a missing comma in the second sentence of the last paragraph of Section 5.2.
Multi-task learning, especially the segmentation part, is not novel for self-driving prediction; see, e.g., the Xu et al. CVPR'17 paper from Berkeley. The generalization experiment shows potential advancement; however, it is less convincing given the limited size of the evaluation data. The authors discuss how to analyze failure causes; however, if the perception model does not work well, it would be hard to analyze the reason for an incorrect prediction.
In general, the paper has merit and these investigations may be helpful for this problem, but it is not good enough for ICLR.
1. What is the main contribution of the paper, and how does it address the problem of black boxes in end-to-end self-driving systems?
2. How effective is the proposed method in making accurate predictions and providing explanations for potential failures?
3. Are there any limitations or weaknesses in the approach suggested by the paper?
4. How does the reviewer assess the organization, style, technical accuracy, and adequacy of citations in the paper?
5. Do you have any suggestions for improving the proposed method or addressing the remaining challenges in self-driving scene representation? | Review | Review
Major Contribution:
This paper details a modified end-to-end architecture with better generalization and explanation ability, implemented using an autoencoder as an efficient feature extractor: first train the autoencoder so that the encoder captures enough depth and segmentation information, then use the encoded information as a more useful, compressed input for training a regression model. The authors claim this model is more robust to a different testing setting, and that observing the decoder's output can help us debug the model when it makes a wrong prediction.
Organization/Style:
The paper is well written, organized, and clear on most points. A few minor points:
1) In the last sentence on page 5, there is a missing table number.
2) I don't think the last part, the fine-tune test, is necessary, since it contains no formal proofs, only speculation.
Technical Accuracy:
The problem that the paper is trying to address is the black-box problem in the end-to-end self-driving system.
The paper proposes a method built around a depth-image and segmentation-mask autoencoder. Though it is shown to be effective in making the right predictions and to have cause-explanation ability for possible prediction failures, I have a few points:
The idea makes sense, and the model will perform better when the given input captures more relevant and saturated representations. The paper relies on two important features, depth information and segmentation information, but other important features are missing. In other words, when the decoder performs badly, the encoder has not captured good depth and segmentation features, and then the model will very likely perform badly as well. However, when the model performs badly, the decoder does not necessarily perform badly, since other information might be missing, for example, failure to detect objects, lanes, traffic lights, etc.
In conclusion, the question is really how to get a good representation of a self-driving scene. I don't think designing two simple autoencoders for depth image reconstruction and image segmentation is enough. It apparently works, but it is not good enough.
Adequacy of Citations:
Good coverage of literature in self-driving. |
ICLR | Title
RETHINKING SELF-DRIVING : MULTI -TASK KNOWLEDGE FOR BETTER GENERALIZATION AND ACCIDENT EXPLANATION ABILITY
Abstract
Current end-to-end deep learning driving models have two problems: (1) Poor generalization ability of unobserved driving environment when diversity of training driving dataset is limited (2) Lack of accident explanation ability when driving models don’t work as expected. To tackle these two problems, rooted on the believe that knowledge of associated easy task is benificial for addressing difficult task, we proposed a new driving model which is composed of perception module for see and think and driving module for behave, and trained it with multi-task perception-related basic knowledge and driving knowledge stepwisely. Specifically segmentation map and depth map (pixel level understanding of images) were considered as what & where and how far knowledge for tackling easier drivingrelated perception problems before generating final control commands for difficult driving task. The results of experiments demonstrated the effectiveness of multitask perception knowledge for better generalization and accident explanation ability. With our method the average sucess rate of finishing most difficult navigation tasks in untrained city of CoRL test surpassed current benchmark method for 15 percent in trained weather and 20 percent in untrained weathers.
1 INTRODUCTION
Observing progressive improvement in various fields of pattern recognition with end-to-end deep learning based methods(Krizhevsky et al., 2012; Girshick, 2015), self-driving researchers try to revolutionize autonomous car field with the help of end-to-end deep learning techniques(Bojarski et al., 2016b; Chen et al., 2015; Codevilla et al., 2017). Impressive results have been acquired by mapping camera images directly to driving control commands(Bojarski et al., 2016b) with simple structure similar to ones for image classfication task(Simonyan & Zisserman, 2014). Further researches were conducted to improve the performance of deep learning based autonomous driving system, for example, Conditional Imitation Learning(Codevilla et al., 2017) approach has been proposed to solve the ambigious action problem.
However, two crutial problems failed to be spotted: (1) Poor generalization ability of unobserved driving environment given limited diversity of training scenerios. For example, though Dosovitskiy et al. (2017) addressed the driving direction selection problem, it showed poor generalization ability in unseen test town which has different map and building structure than training town’s. This generalization problem is extremely important since collected driving dataset always has limitation of diversity (2) Current end-to-end autonomous approaches lack of accident explanation ability when these models behave unexpectedly. Although saliency map based visualization methods(Smilkov et al., 2017; Sundararajan et al., 2017; Springenberg et al., 2014; Bojarski et al., 2016a) have been proposed to dig into the ’black box’, the only information these methods could bring is the possible attention of the model instead of the perception process of the model.
We proposed a new driving approach to solve the two aforementioned problems by using multi-task basic perception knowledge. We argue that when end-to-end model is trained to address a specific difficult task, it’s better to train the model with some basic knowledge to solve relevant easier tasks
before(Pan et al., 2010). An analogy for this can be observed when human beings learn a difficult knowledge. For example, to solve a complex integration problem, compared with students without basic math knowledge, students who know about basic knowledge of math are able to learn the core of intergration more quickly and solve other similar integration problems instead of memorizing the solution of the specific problem.
Our proposed model consists of two modules: perception module and driving module as in Fig. 1. The perception module is used for learning easier driving-related perception knowledge, which we refer as ability of pixel level understanding of input including what & where and how far knowledge. We trained perception module with segmentation map and depth map first, while the former serves as what & where knowledge and the latter serves as how far knowledge. By visualizing inferenced segmentation and depth results whether perception process works well or not could be inferred. After the perception module was trained to have ability of pixel level understanding of its image input, we freezed the perception module weights and trained driving module with driving dataset. This decomposition of end-to-end driving network strucuture is considered to be mediated perception approach(Ullman, 1980). With our proposed driving structure and stepwise training strategy, the generalization and accident explanation problems were addressed to a certain extent.
2 RELATED WORK
Depending on whether mediated perception knowledge are generated, self-driving models are categorized into mediated perception approach(Ullman, 1980) and behavior reflex approach.
For mediated perception approaches, there are several well-behaved deep learning methods, for example, Deep-Driving method(Chen et al., 2015) fisrtly converts input RGB images to some key perception indicators related to final driving controls. They designed a very simple driving controller based on predicted perception indicators. Problem of this approach is that key perception indicators have limitation of describing unseen scenerios and are difficult to collect in reality. Except for inferencing for final driving controls, there are approaches which focus on inferencing intermediate description of driving situation only. For separate scene understanding task, car detection(Lenz et al., 2011) and lane detection(Aly, 2008) are two main topics in this area.
Instead of inferencing one perception task at most, multi-task learning method aims at tackling several relevant tasks simultaneously. Teichmann et al. (2016) uses input image to solve both object detection and road segmentation tasks. Branched E-Net(Neven et al., 2017) not only infers for segmentation map, but also depth map of current driving scenarios. These multi-task learning methods shows better result when sharing the encoder of different perception tasks together, but they haven’t really tried to make the car drive either in simulator or reality.
As for behavior reflex approach which is also called ’end-to-end learning’, NVIDIA firstly proposed a model for mapping input image pixels directly to final driving control output(steer only)(Bojarski et al., 2016b). Some other approaches further atempted to create more robust models, for example, long short-term memory (LSTM) was utilized to make driving models store a memory of past(Chi & Mu, 2017).
One problem is that aforementioned methods were tested in dissimlar driving scenerios using different driving dataset, thus it’s hard to determine if model itself is the source of the better driving behavior instead of effectiveness of data(Sun et al., 2017).
Codevilla et al. (2017) was tested in a public urban driving simulator Dosovitskiy et al. (2017) and sucessed to tackle the ambigous action problem which refers as optimal driving action can’t be inferred from perceptual input alone. Benefit from CoRL test in Dosovitskiy et al. (2017), fair comparision could be conducted using same driving dataset. Codevilla et al. (2017) showed limitation of generalization ability problem in test town different from train town(Dosovitskiy et al., 2017) as in CoRL test training dataset could be only collected from single train town.
When the end-to-end driving method behaves badly and causes accidents, accident explanation ability is required. Though saliency-map based visualization methods(Bojarski et al., 2016a; Smilkov et al., 2017) help understand the influence of input on final driving control, it’s extremely hard to derive which module of the model fails when driving problems happen — If the model percepts incorrectly or the driving inference processes wrongly based on good perception information. Driving system was enabled to give quantitative explanation by visualizing inferenced multi-task basic knowledge to solve this problem.
3 FRAMEWORK OF PROPOSED SYSTEM
Basic strucure of the proposed model is shown in Fig. 1. The proposed model has two parts: (1) Multi-task basic knowledge perception module (2) Driving decision branch module. The perception module is used to percept the world by inferencing dpeth map and segmentation map, which is composed of one shared encoder and two decoders for two different basic perception knowledge: (1) Segmentation decoder for generating ’what & where’ information by predicting segmentation maps; (2) Depth decoder for predicting ’how far’ the objects in vision are by inferencing depth maps. The perception module is aimed at extracting encoded feature map containing pixel level understanding information for driving module and qualitative explanation when proposed model doesn’t work as expected by visualizing the predicted segmentation and depth maps to determine if the driving problem is caused by percept process or driving process.
The driving module enbales the model to generate driving decisions for different direction following guidances. We categorized the real world driving guidance into four types: (1) Following lane (2) Turning left (3) Going straight (4) Turning right as done in Codevilla et al. (2017). For each driving guidance direction, there is a driving branch(which predicts the value of driving controls) corresponding to it, therefore there are four driving guidance branches totally. The output of second last layer in perception module is inputted to the driving module, therefore the training of which could benefit from the multi-knowledge extracted by the perception module. Instead of linear layers, convolution layers are utilized for inferencing final driving controls for each direction, which helps keeping the spatial relation of information and reducing number of parameters as non-negligible quantity of direction branches.
3.1 MULTI-TASK BASIC KNOWLEDGE PERCEPTION MODULE
The perception module is built with the residual blocks proposed in (He et al., 2016), which address gradient vanishing and the 'degradation problem', and it has a structure similar to SegNet (Badrinarayanan et al., 2015), proposed for efficient image segmentation. The major difference is that our proposed method has two different decoders for inferring both segmentation and depth maps simultaneously, instead of segmentation maps only. Besides, we constrain the total stride in the encoder to 8 to keep the feature map resolution, as a large total stride has a negative influence on feature map reconstruction. Hybrid Dilated Convolution (Wang et al., 2017) is adopted as the last part of the encoder, as it enlarges the receptive field and avoids the theoretical gridding problem. Groupout (Park) is also adopted to avoid overfitting in the convolution network.
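For illustration, below is a minimal PyTorch sketch of a Hybrid Dilated Convolution block of the kind used at the end of the encoder; the channel width and the dilation pattern (1, 2, 5) are assumptions, not the paper's exact configuration.

```python
import torch.nn as nn

class HDCBlock(nn.Module):
    """Hybrid Dilated Convolution: successive dilation rates whose pattern
    (e.g. 1, 2, 5) avoids the gridding artifact while enlarging the
    receptive field without further downsampling."""
    def __init__(self, channels, dilations=(1, 2, 5)):
        super().__init__()
        layers = []
        for d in dilations:
            layers += [
                # padding=d keeps spatial size for a 3x3 conv with dilation d
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            ]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)
```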
3.2 DRIVING DECISION BRANCH MODULE
The driving module is built with residual blocks and has the same general form as Codevilla et al. (2017) in the last output layer, with several direction outputs. It is based entirely on convolutional layers in order to keep the spatial information and reduce parameters, motivated by Springenberg et al. (2014). Four different high-level driving guidances, such as "turning right", are utilized to select which direction branch's output should be considered as the final driving output. Driving outputs contain steering and acceleration/brake, both ranging from -1 to 1. Since there are 4 output branches corresponding to the 4 high-level driving guidances, 8 feature-map-sized convolution kernels are set in the last layer to output scalar values, in which each pair is regarded as the driving controls for one driving guidance. To probe the limits of RGB images alone, no other information such as the current speed or steering angle is used as input. Instead, we attempt to predict the current speed based on the current RGB image to keep the driving smooth, as done in (Codevilla et al., 2017). The input of the driving module is not the last layer's output of the encoder part of the perception module, but the second-last layer's output, an empirical selection made for best generalization.
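Below is a minimal PyTorch sketch of such a command-conditioned branch module; the channel counts and feature map size are assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

class DrivingBranches(nn.Module):
    """Four convolutional branches, one per high-level guidance
    (follow lane / left / straight / right). The last layer uses
    feature-map-sized kernels: 8 kernels = 4 branches x 2 controls
    (steer, acceleration/brake), each producing one scalar."""
    def __init__(self, in_ch=128, feat_hw=(12, 25)):  # sizes are assumptions
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 2, kernel_size=feat_hw),  # 2 scalars per branch
                nn.Tanh(),  # controls range from -1 to 1
            ) for _ in range(4)
        ])

    def forward(self, feat, command):
        # feat: (B, C, H, W) from the frozen encoder; command: (B,) in {0..3}
        outs = torch.stack([b(feat).flatten(1) for b in self.branches], dim=1)
        return outs[torch.arange(feat.size(0)), command]  # (B, 2): commanded branch
```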
4 EXPERIMENTS
4.1 SYSTEM SETUP
The training dataset is collected in the CARLA simulator (Dosovitskiy et al., 2017). CARLA is a self-driving simulator developed by Intel for collecting self-driving related information and evaluating driving models in a standard testing environment named the CoRL test. The CoRL test is composed of 4 tasks of increasing difficulty: (1) Straight: the goal is straight ahead of the starting position. (2) One turn: getting to the goal takes one turn, left or right. (3) Navigation: navigation with an arbitrary number of turns. (4) Navigation with dynamic obstacles: same as the previous task, but with other vehicles and pedestrians (Dosovitskiy et al., 2017).
The main metric for quantitative evaluation is the average success rate of finishing the separate tasks in the CoRL test. The CoRL test contains tests in both the trained town and an untrained town, under both trained and untrained weathers. The trained town and the untrained town are constructed with different maps and different building textures.
4.2 DATASET
The dataset for training our model can be categorized into 2 parts: (1) a perception module training dataset; (2) a driving module training dataset. For the perception module, we trained with 35,000 pairs of RGB images, segmentation maps and depth maps, and evaluated with 5,000 pairs. For the driving module, we trained with 455,000 samples and evaluated on 62,000 samples. Before training our proposed model, two vital data processing methods were used: dataset balancing and data augmentation.
For a fair comparison, we use the same driving dataset published by Conditional Imitation Learning (Codevilla et al., 2017), except that we collected extra segmentation and depth maps in the train town for training our proposed perception module.
4.2.1 DATA BALANCING
Dataset balancing contributed to better generalization of both the perception module and the driving module in our experiments, as it enables each mini-batch to be a microcosm of the whole dataset. For the perception module, the dataset was balanced to ensure that each mini-batch contains all different training weathers and an equal amount of going-straight and turning situations. For the driving module, we balance each training mini-batch to ensure an equal distribution of the different driving direction guidances, and reorganized the data so that large-steer samples (absolute value larger than 0.4) account for 1/3 of each mini-batch, brake situations for 1/3, and noise-steer situations for 1/10.
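A minimal sketch of how such a balanced mini-batch could be composed is given below; the bucket layout and batch size are assumptions, while the ratios follow the text.

```python
import random

def balanced_minibatch(buckets, batch_size=120):
    """Compose one mini-batch from pre-sorted sample buckets so that each
    batch mirrors the target distribution described above: 1/3 large-steer
    samples (|steer| > 0.4), 1/3 brake samples, 1/10 noise-steer samples,
    and the remainder ordinary driving. The bucket structure itself is an
    assumption about the data layout."""
    quota = {
        "large_steer": batch_size // 3,
        "brake": batch_size // 3,
        "noise_steer": batch_size // 10,
    }
    quota["normal"] = batch_size - sum(quota.values())
    batch = []
    for name, n in quota.items():
        batch += random.sample(buckets[name], n)
    random.shuffle(batch)
    return batch
```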
4.2.2 DATA AUGMENTATION
We add Gaussian noise, coarse dropout, contrast normalization, and Gaussian blur to the training data of both the perception and driving modules to enlarge the training data distribution.
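For illustration, a minimal NumPy/SciPy sketch of this augmentation pipeline follows; all magnitudes and probabilities are assumptions, as the paper does not report them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def augment(img, rng=np.random.default_rng()):
    """Apply the augmentations listed above to one float image in [0, 1]
    of shape (H, W, 3), with H and W assumed divisible by 8."""
    img = img + rng.normal(0.0, 0.02, img.shape)           # Gaussian noise
    if rng.random() < 0.5:                                 # coarse dropout
        mask = rng.random((img.shape[0] // 8, img.shape[1] // 8, 1)) < 0.05
        img = img * (1 - np.kron(mask, np.ones((8, 8, 1))))
    img = (img - img.mean()) * rng.uniform(0.8, 1.2) + img.mean()  # contrast
    img = gaussian_filter(img, sigma=(rng.uniform(0, 1.0),) * 2 + (0,))  # blur
    return np.clip(img, 0.0, 1.0)
```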
4.3 TRAINING DETAILS
We trained the whole system using a step-wise training method: first we train the perception module with the multi-task basic perception knowledge, then we freeze the weights of the perception module and train the driving module with the driving dataset. For training the perception module, we use a mini-batch size of 24 and set the ratio of segmentation loss to depth loss to 1.5:1. Softmax categorical cross-entropy is used for the segmentation loss, and binary cross-entropy for the depth loss. Adam with a learning rate of 0.001 is used as the optimizer; the learning rate is multiplied by a factor of 0.2 if the validation loss does not drop for 1 epoch. L2 weight decay and early stopping are also used to avoid overfitting. For training the driving module, we use an MSE loss and Adam with a starting learning rate of 0.002 that decays exponentially by a factor of 0.9 every epoch. Early stopping and L2 decay are used for regularization.
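A condensed PyTorch sketch of this two-stage schedule is shown below; the module and loader names (`perception`, `driving`, `encoder_features`, etc.) are hypothetical, while the hyperparameters follow the text.

```python
import torch
import torch.nn as nn

def train_stepwise(perception, driving, perception_loader, driving_loader):
    """Two-stage schedule: (1) train the perception module on segmentation +
    depth with a 1.5:1 loss ratio; (2) freeze it and train the driving module."""
    seg_loss, depth_loss = nn.CrossEntropyLoss(), nn.BCELoss()
    opt = torch.optim.Adam(perception.parameters(), lr=1e-3)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.2, patience=1)
    for rgb, seg_gt, depth_gt in perception_loader:   # mini-batch size 24
        seg, depth = perception(rgb)
        loss = 1.5 * seg_loss(seg, seg_gt) + 1.0 * depth_loss(depth, depth_gt)
        opt.zero_grad(); loss.backward(); opt.step()
    # sched.step(val_loss) would be called once per epoch on validation loss.

    for p in perception.parameters():                 # stage 2: freeze perception
        p.requires_grad_(False)
    opt2 = torch.optim.Adam(driving.parameters(), lr=2e-3)
    sched2 = torch.optim.lr_scheduler.ExponentialLR(opt2, gamma=0.9)  # per epoch
    mse = nn.MSELoss()
    for rgb, controls, command in driving_loader:
        feat = perception.encoder_features(rgb)       # second-last encoder layer
        loss = mse(driving(feat, command), controls)
        opt2.zero_grad(); loss.backward(); opt2.step()
```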
5 EXPERIMENTS & RESULTS
5.1 GENERALIZATION ABILITY TEST
We compare the driving performance of our proposal with other methods tested in the CoRL test via the success rate of finishing each task. The detailed results are shown in Table 1.
From Table 1, though our proposal finished slightly fewer tasks under training conditions compared with other methods, it achieved a much higher success rate in the untrained town environments, which demonstrates that our model has a much better ability to generalize to the untrained town than the other methods tested in the CoRL test when trained with a limited diversity of training conditions. One important note is that we use almost the same driving dataset for training as the method of Codevilla et al. (2017) shown in Table 1.
We can also visualize the perception process while the model works. One example of a test in the untrained town under an untrained weather is shown in Fig. 2.
5.2 ORIGIN OF BETTER GENERALIZATION ABILITY
Since we observed that our model has better generalization ability in the unseen town than other methods when almost the same driving dataset is used for training (except that we collected extra depth and segmentation maps in the same training environments), we want to investigate the origin of this better generalization ability. There are two possible reasons: (1) the basic knowledge (segmentation and depth maps); (2) the network structure. We therefore conduct experiments comparing the performance of two methods:
• Our original proposal: first train the perception module with basic knowledge; after training the perception module, freeze its weights and train the driving module with the driving dataset.
• Compared method: train the encoder of the perception module and the driving module together with the driving dataset. No basic perception knowledge is used for training the model.
Since tests in CoRL cost much time, we limited our evaluation to the most difficult setting: the untrained town under untrained weathers. Results are shown in Table 2. From the results it is obvious that the multi-task basic knowledge we use in the training phase, rather than the network structure, is the origin of our proposal's good generalization to the untrained town. Moreover, the network structure could be improved to achieve better performance in the future.
5.3 QUALITATIVE CAUSE EXPLANATION ABILITY OF DRIVING PROBLEMS
Besides leading to better generalization ability, the basic knowledge can also be used to give a qualitative explanation of driving problems. The basic knowledge, i.e., the segmentation and depth maps, is output by the perception module during the test phase, so how the driving module perceives the current scenario can be known by simply visualizing the segmentation and depth outputs of the perception module. Based on this predicted pixel-level understanding of the situation, the cause of a driving problem can be inferred.
One example is shown in Fig. 3. For a failed straight task in the untrained town under the untrained weather 'soft rain sunset', where the driving model failed to move forward, we visualized the segmentation and depth maps predicted by the perception module. It is obvious that this failure case is caused by the perception module, since the model falsely perceived that there was a car in front of it and did not start in order to avoid a collision. There is actually no car, so the perception module made a false judgement. Interestingly, the sun-ray reflection on the wet road in Fig. 3 sometimes fools readers into thinking that there is a car, and the perception module has a similar understanding to these readers; in that sense, the perception module arguably makes the right judgement rather than a wrong one. For traditional end-to-end learning driving methods (Bojarski et al., 2016b), such reasoning is impossible, as they do not provide the cause explanation ability that is of great importance for the practical use of deep learning driving models.
5.4 FINE-TUNE TEST
Fine-tuning (Yosinski et al., 2014), which refers to using the weights of a model well trained on a target-related dataset as the initial weights for training on the target dataset, instead of using weight initialization methods (Glorot & Bengio, 2010; He et al., 2015; LeCun et al., 2012), is a common trick in deep learning, since empirically it can lead to better generalization on the new target dataset. In our specific case, the fine-tune method means that after training the perception module we continue to train the weights of its encoder instead of freezing them. In Table 3 we compare the performance of the fine-tune method and our original proposed method.
In this comparison we obtained a counter-intuitive result: after fine-tuning the weights of the perception module, the driving model achieved worse results than the original method, which freezes the weights of the perception module when training the driving module.
One possible reason is that the generalization ability lies in the perception module rather than the driving module; therefore, when we train the perception module again with the driving dataset, its ability to generate compressed multi-task knowledge is destroyed. As the fine-tuned model can no longer benefit from the multi-task knowledge, it fails to reach the same generalization ability as the original proposal.
Furthermore, we conduct an experiment visualizing one direction of the loss surface by projecting it onto one dimension (Goodfellow & Vinyals, 2014) to find a qualitative explanation for this result. The x axis corresponds to a linear interpolation between the trained weights of the original proposed method and those of the compared fine-tuned method. The weights along this projection direction are calculated as in Equation 1.
$$f(\alpha x_{\text{finetune}} + (1-\alpha)\, x_{\text{rgb0}}), \qquad \alpha \in [-1, 2] \quad (1)$$
Here α is the linear interpolation ratio, and x_finetune and x_rgb0 are the trained weights of the fine-tune method and of the original proposed method, respectively. f(·) is the loss function of the whole model, viewed as a function of the model weights. We draw the projected loss surface in Fig. 4 by sampling the interpolated weights. From Fig. 4 we obtain one possible qualitative explanation, from a loss-surface perspective, for the worse behavior of the fine-tune method: the model weights obtained by fine-tuning are stuck on a very flat surface, while the model weights of the original proposed method successfully find a local minimum.
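A minimal sketch of how Equation 1 can be evaluated in PyTorch follows; the `loss_on_batch` callable and the number of samples are assumptions.

```python
import numpy as np
import torch

@torch.no_grad()
def interpolated_losses(model, w_finetune, w_rgb0, loss_on_batch, n=31):
    """Evaluate Eq. 1: sample alpha in [-1, 2], set the model weights to the
    linear interpolation of the two trained solutions, and record the loss.
    w_finetune / w_rgb0 are state_dicts of the two trained models."""
    alphas = np.linspace(-1.0, 2.0, n)
    losses = []
    for a in alphas:
        blended = {k: (a * w_finetune[k] + (1 - a) * w_rgb0[k])
                   if w_rgb0[k].is_floating_point() else w_rgb0[k]
                   for k in w_rgb0}                   # skip integer buffers
        model.load_state_dict(blended)
        losses.append(loss_on_batch(model).item())
    return alphas, losses
```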
6 CONCLUSION
In this paper we propose a new driving system with better generalization and accident explanation ability, achieved by letting the system perform simpler driving-related perception tasks before generating commands for the difficult driving task. Through multiple experiments we empirically showed the effectiveness of multi-task basic perception knowledge for better generalization to an unobserved town when the diversity of the training dataset is limited. Besides, our proposed model has a self-explanation ability: by visualizing the segmentation and depth maps predicted by the perception module, the cause of driving problems can be determined when they happen. One interesting result, obtained by comparing different training strategies, is that the generalization ability of driving originates from the basic knowledge and lies in the weights of the perception module, which should not be modified during training with the driving dataset. We hope our work can motivate other researchers to use multi-task, target-related perception knowledge for better performance in robot learning. In future work we will investigate more effective network structures.
ACKNOWLEDGMENTS
Thanks to all Prof.Ogata lab members especially Kamuza SASAKI san who teaches me about Deep Learning patiently when I have zero knowledge of what it is. Great thanks to my bros Zehai TU and Pengfei LI who support me no matter how annoying I am in the midnight. Final thanks to my homie Mengcheng SONG for being a Hiphop guide for me and makes me understand about the importance of always ’keep it real’. | 1. What is the main contribution of the paper regarding end-to-end driving?
2. What are the strengths of the proposed approach, particularly in its simplicity and evaluation method?
3. What are the weaknesses of the paper, especially regarding its novelty and writing quality?
4. How does the reviewer assess the significance and insights of the paper's content?
5. Are there any concerns regarding the reproducibility of the proposed approach? | Review | Review
# Summary
This submission proposes a multi-task convolutional neural network architecture for end-to-end driving (going from an RGB image to controls) evaluated using the CARLA open source simulator. The architecture consists of an encoder and three decoders on top: two for perception (depth prediction and semantic segmentation), and one for driving controls prediction. The network is trained in a two-step supervised fashion: first training the encoder and perception decoders (using depth and semantic segmentation ground truth), second freezing the encoder and training the driving module (imitation learning on demonstrations). The network is evaluated on the standard CARLA benchmark showing better generalization performance in new driving conditions (town and weather) compared to the CARLA baselines (modular pipeline, imitation learning, RL). Qualitative results also show that failure modes are easier to interpret by looking at predicted depth maps and semantic segmentation results.
# Strengths
Simplicity of the approach: the overall architecture described above is simple (cf. Figure 1), combining the benefits of the modular and end-to-end approaches into a feed-forward CNN. The aforementioned two-stage learning algorithm is also explained clearly. Predicted depth maps and semantic segmentation results are indeed more interpretable than attention maps (as traditionally used in end-to-end driving).
Evaluation of the driving policy: the evaluation is done with actual navigation tasks using the CARLA (CoRL'18) benchmark, instead of just off-line behavior cloning accuracy (often used in end-to-end driving papers, easier to overfit to, not guaranteed to transfer to actual driving).
Simple ablative analysis: Table 2 quantifies the generalization performance benefits of pretraining and freezing the encoder on perception tasks (esp. going from 16% to 62% of completed episodes in the new town and weather dynamic navigation scenario).
# Weaknesses
## Writing
I have to start with the most obvious one. The paper is littered with typos and grammatical errors (way too many to list). For instance, the usage of "the" and "a" is almost non-existent. Overall, the paper is really hard to read and needs a thorough pass of proof-reading and editing. Also, please remove the acknowledgments section: I think it is borderline breaking the double-blind submission policy (I don't know these persons, but if I did that would be a breach of ICLR submission policy). Furthermore, I think its contents are not very professional for a submission at a top international academic venue, but that is just my opinion.
## Novelty
This is the main weakness for me. The architecture is very close to at least the following works:
- Xu, H., Gao, Y., Yu, F. and Darrell, T., End-to-end learning of driving models from large-scale video datasets (CVPR'17): this reference is missing from the paper, whereas it is very closely related, as it also shows the benefit of a segmentation decoder on top of a shared encoder for end-to-end driving (calling it privileged training);
- Codevilla et al's Conditional Imitation Learning (ICRA'18): the only novelty in the current submission w.r.t. CIL is the addition of the depth and segmentation decoders;
- Müller, M., Dosovitskiy, A., Ghanem, B., & Koltun, V., Driving Policy Transfer via Modularity and Abstraction (CoRL'18): the architecture also uses a shared perception module and segmentation (although in a mediated way instead of auxiliary task) to show better generalization performance (including from sim to real).
Additional missing related works include:
- Kim, J. and Canny, J.F., Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention (ICCV'17): uses post-hoc attention interpretation of "black box" end-to-end networks;
- Sauer, A., Savinov, N. and Geiger, A., Conditional Affordance Learning for Driving in Urban Environments (CoRL'18): also uses a perception module in the middle of the CIL network showing better generalization performance in CARLA (although a bit lower than the results in the current submission).
- Pomerleau, D.A., Alvinn: An autonomous land vehicle in a neural network (NIPS'89): the landmark paper for end-to-end driving with neural networks!
## Insights / significance
In light of the aforementioned prior art, I believe the claims are correct but already reported in other publications in the community (cf. references above). In particular, the proposed approach uses a lot more strongly labeled data (depth and semantic segmentation supervision in a dataset of 40,000 images) than the competing approaches mentioned above. For instance, the modular pipeline in the original CARLA paper uses only 2,500 labeled images, and I am sure its performance would be vastly improved with 40,000 images, but this is not evaluated, hence the comparison in Table 1 being unfair in my opinion. This matters because the encoder in the proposed method is frozen after training on the perception tasks, and the main point of the experiments is to convince that it results in a great (fixed) intermediate representation, which is in line with the aforementioned works doing mediated perception for driving.
The fine-tuning experiments are also confirming what is known in the literature, namely that simple fine-tuning can lead to catastrophic forgetting (Table 3).
Finally, the qualitative evaluation of failure cases (5.3) leads to a trivial conclusion: a modular approach is indeed more interpretable than an end-to-end one. This is actually by design and the main advocated benefit of modular approaches: failure in the upstream perception module yields failure in the downstream driving module that builds on top of it. As the perception module is, by design, outputting a human-interpretable representation (e.g., a semantic segmentation map), this leads to better interpretation overall.
## Reproducibility
There are not enough details in section 3.1 about the deep net architecture to enable re-implementation ("structure similar to SegNet", no detailed description of the number of layers, non-linearities, number of channels, etc).
Will the authors release the perception training dataset collected in CARLA described in Section 4.2?
# Recommendation
Although the results of the proposed multi-task network on the CARLA driving benchmark are good, it is probably due to using almost two orders of magnitude more labeled data for semantic segmentation and depth prediction than prior works (which is only practical because the experiments are done in simulation). Prior work has confirmed that combining perception tasks like semantic segmentation with end-to-end driving networks yield better performance, including using a strongly related approach (Xu et al). In addition to the lack of novelty or new insights, the writing needs serious attention.
For these reasons, I believe this paper is not suitable for publication at ICLR. |
ICLR | Title
Intervention-based Recurrent Causal Model for Non-stationary Video Causal Discovery
Abstract
Nonstationary causal structures are prevalent in real-world physical systems. For example, stacked blocks interact until they fall apart, while billiard balls move independently until they collide. However, most video causal discovery methods cannot discover such nonstationary causal structures, due to the lack of modeling of the instantaneous changes and the dynamics of the causal structure. In this work, we propose the Intervention-based Recurrent Causal Model (IRCM) for nonstationary video causal discovery. First, we extend the existing intervention-based causal discovery framework for videos to formulate the instantaneous change of the causal structure in a principled manner. Then, we use a recurrent model to sequentially predict the causal structure model based on previous observations, capturing the nonstationary dynamics of the causal structure. We evaluate our method on two popular physical system simulation datasets with various types of multi-body interactions. Experiments show that the proposed IRCM achieves state-of-the-art performance on both the counterfactual reasoning and future forecasting tasks.
1 INTRODUCTION
Causal reasoning from visual input is essential for intelligent systems to understand the complex mechanisms of the physical world. For instance, autonomous vehicles need to infer the unseen causal structures on the road that drive the state evolution of other agents across time, so as to better anticipate future events. One main obstacle in discovering such causal structures is the dynamic nature of events. In Figure 1, we illustrate the varying causal relationships in a simple multi-body system where stacked blocks fall to the ground. In nonstationary video sequences, the causal structure can have abrupt changes and/or long-term dependencies, posing challenges for causal graphical models (CGM).
For the first challenge, most CGMs in video causal understanding cannot handle abrupt causal relationship changes. Li et al. (2020) (VCDN, Figure 2a) partially address this issue by learning a stationary causal summary graph, where causal structures are learned but fixed throughout the video. Zheng et al. (2018) (DYNOTEARS, Figure 2b) relaxed this fixed-structure setting by assuming a stationary order for periods bigger than 1. On the other hand, Brouillard et al. (2020) (DCDI, Figure 2c) recently proposed a differentiable causal model for a spatial graph that naturally captures the abrupt change of probability distributions during interventions. In this work, we naturally extend the intervention-based causal model to graphs with time-lag edges in videos, i.e., the current objects' states are fully determined by previous states (Figure 2d).
For the second challenge, most CGMs in video causal understanding depend purely on the object state observations. That is, the causal graph at time t is conditionally independent of the causal graph at time t − 1 given the observed object states. As illustrated in Figure 1, the sequence of CGMs in a nonstationary video can be modeled as a trajectory of graphs. In this work, we adopt a recurrent network to sequentially predict the CGM and thus model this trajectory.
Based on the intuitions above, we propose the Intervention-based Recurrent Causal Model (IRCM) to better capture the dynamics in nonstationary videos. As the ground truth CGMs are often not directly measurable, we adopt two popular downstream tasks to benchmark the efficacy of the proposed model: counterfactual reasoning and future state forecasting. Deducing alternative results counter to reality over the discovered CGM directly expresses the impact of causality. Moreover, the causal knowledge provides better insight into which factors affect the target variable and how to manipulate the system properly.
We summarize the contributions of this work as follows:
• We introduce the IRCM model to extend the previous intervention-based causal discovery framework to nonstationary video sequences.
• We propose to use recurrent networks to capture the long-term trajectory of Causal Graph Models (CGM) and provide an optimization solution to train recurrent networks together with downstream causal models.
• We achieve state-of-the-art performance on two downstream tasks, counterfactual reasoning and future forecasting, on two standard benchmark datasets (CoPhy (Baradel et al., 2020), Fabric Manipulation (Li et al., 2020)), showing an average improvement of 11% across 9 metrics.
2 RELATED WORK
Causal Discovery of Stationary Models. Given input time-series data, the goal is to uncover one fixed directed acyclic graph (DAG), where edges represent the direct causal relationships among variables. There are two main approaches: observation-based and intervention-based. The observation-based approach relies fully on passive observation of the input system. Constraint-based methods rely on conditional independence tests as constraint satisfaction to recover Markov-equivalent graphs (Spirtes et al., 2000; Entner & Hoyer, 2010; Colombo et al., 2011). Score-based methods assign a score to each DAG and search in this score space (Chickering, 2002; Zheng et al., 2018). A third class of methods exploits asymmetries or causal footprints to uniquely identify a DAG (Shimizu, 2014; Zhang & Hyvärinen, 2009).
In practice, domain experts may design interventional experiments and collect additional data from the input system. The intervention-based approach aims to combine such interventional data with the observational data for better identifiability of the causal structure (Eberhardt, 2012; Eberhardt et al., 2012). However, many current approaches (Hyttinen et al., 2013; Ghassami et al., 2018b; Kocaoglu et al., 2017; Wang et al., 2017; Shanmugam et al., 2015; Peters et al., 2016; Rothenhäusler et al., 2015; Ke et al., 2019) either assume full knowledge of the intervention, make strong assumptions about the model class, or have scalability limitations. Recently, Brouillard et al. (2020) utilized the continuous-constrained framework to model the interventions with neural network models. In contrast, our proposed method aims to uncover nonstationary causal structures.
Causal Discovery of Nonstationary Models. To extend to nonstationary data, recent works discover causal models in each sliding window separately, and then compare and merge them. Adams & MacKay (2007) explicitly detect the change points and divide the time series into stationary processes. To implicitly model the change of the causal model, Huang et al. (2015) assume certain smoothness properties, and Zhang et al. (2017) use kernel distribution embeddings to describe shifting probability distributions. Later, the problem was reformulated within the online parameter learning framework (Song et al., 2009; Xing et al., 2010). To tackle varying instantaneous causal relations, both linear (Ghassami et al., 2018a; Huang et al., 2019; Huang & Zhang, 2019; Huang et al., 2020a) and nonlinear (Huang et al., 2020b) causal models have been proposed. Our proposed method treats the nonstationary changes of the system as interventions and re-purposes the intervention-based framework to discover time-varying causal graph structures.
Video Causal Discovery. The computer vision literature has accumulated several efforts to tackle the challenges of video modeling and prediction (Ye et al., 2019; Hsieh et al., 2018; Yi et al., 2020). One topic that has enjoyed recent success is reasoning about object dynamics in a video sequence. A line of research attempts to solve this task by modeling the correlations in a spatio-temporal context, such as (Yi* et al., 2020; Chen et al., 2021; Bakhtin et al., 2019; Qi et al., 2021; Zhang et al., 2021). However, modeling the dependencies alone might not suffice to offer clear interpretations of object dynamics as we humans do. Addressing this issue, the authors of (Baradel et al., 2020) and (Li et al., 2020) introduce causal knowledge (Schölkopf et al., 2021; Bengio et al., 2020; Runge et al., 2019) to this task, but neither of them is able to fully uncover the causal structure underlying the video sequences: CoPhyNet (Baradel et al., 2020) derives an alternative output based on a known causal graph, and VCDN (Li et al., 2020) focuses on recovering stationary causal structures from the video. Instead, our proposed method applies the new intervention-based method to capture nonstationary causal structures.
3 METHODOLOGY
In this section, we present the Intervention-based Recurrent Causal Model (IRCM) for non-stationary video causal discovery. We first give an overview of the model architecture, as shown in Figure 3, then dive into the two components of IRCM: the Recurrent Network and the Intervention-based Causal Model.
(Figure 3: overview of IRCM, showing the Recurrent Network and the Intervention-based Causal Model, connected by a sampling step.)
3.1 PROBLEM FORMULATION
We factorize the joint probability of a temporal sequence into a sequential form:
$$p(x^{1:T}; \theta) = p(x^1; \theta) \prod_{t=2}^{T} p(x^t \mid x^{1:t-1}; \theta), \quad (1)$$
where θ denotes the model parameters to learn. This formulation makes it easy to do future forecasting by conditioning any unknown x^t on the observed or previously predicted history x^{1:t−1}. For simplicity, we decode multiple frames in an autoregressive way, i.e., at each timestep, we predict x̂^t as the mode of p(x^t|x^{1:t−1}; θ) and do further prediction conditioning on this prediction. Furthermore, we decompose the density function into a Recurrent Network (RN) and an Intervention-based Causal Model (ICM) by:
$$f_\theta(x^t \mid x^{1:t-1}) = f_{\text{ICM}}(x^t \mid M^t, I^t, x^{1:t-1}; \theta_{\text{ICM}}) \quad (2)$$
$$M^t, I^t \sim \text{Bern}(\alpha^t, \beta^t) \quad (3)$$
$$\alpha^t, \beta^t = \text{RN}(x^{1:t-1}; \theta_{\text{RN}}) \quad (4)$$
In this way, we extend the framework of continuous constrained optimization for structure learning to sequential data.
3.2 MODEL DESIGNS
Intervention-based Causal Model. Formally, given the observed d agents in the scene from time 1 to T, a joint probability distribution f(x) depicts their states through time. In the context of a Causal Graph Model (CGM) (Pearl et al., 2016), a directed acyclic graph (DAG) G with dT nodes defines f(x), where node x^t_j is associated with agent j at time step t. Directed edges represent causal relationships. The distribution of agent states at time t can be factorized as:
$$f(x^t \mid x^{1:t-1}; \theta) = \prod_{j=1}^{d} f(x^t_j \mid \text{Pa}(x^t_j); \theta), \quad (5)$$
where Pa(x^t_j) denotes the set of parent nodes of x^t_j in G. Eq. 5 implicitly assumes causal sufficiency (Peters et al., 2017), i.e., our work does not involve any hidden confounding elements. Also, we consider neither instantaneous edges nor edges that go back in time. Simply put, Pa(x^t_j) ⊆ {x^i_j}_{i<t}. This property makes our causal graph fully identifiable in the context of video sequences, as in Li et al. (2020).
Eq. 5 allows us to swap f(x^t_j | Pa(x^t_j)) with another conditional distribution, which is called an intervention. An intervention target set I ⊆ V is a subset of graph nodes on which interventions are exerted. We consider an intervention family I = {I_k}_{k=1}^{K}. In particular, I_1 = ∅ denotes the observational distribution. We further use I^t_k to denote the intervened nodes at time t in the k-th intervention family. Given an interventional family I_k, we formalize the intervened distribution at time t by:
$$f^{(k)}(x^t) = \prod_{j \notin I^t_k} f^{(1)}(x^t_j \mid \text{Pa}(x^t_j)) \prod_{j \in I^t_k} f^{(k)}(x^t_j \mid \text{Pa}(x^t_j)). \quad (6)$$
In our case, we use K = 2, assuming only one intervention family besides the observational one. Following Brouillard et al. (2020), we use neural networks (NN) to output the parameters of the density function f̃, e.g., a Gaussian:
$$f^{(1)} = \tilde{f}(\,\cdot\,; \text{NN}(\,\cdot\,, \phi^t_j)), \qquad f^{(2)} = \tilde{f}(\,\cdot\,; \text{NN}(\,\cdot\,, \psi^t_j)), \quad (7)$$
where φ and ψ are the parameters of the observational and interventional density functions, respectively. Thus, Eq. 6 can be written as:
$$f_{\text{ICM}}(x^t \mid M^t, I^t, x^{1:t-1}; \theta_{\text{ICM}}) = \prod_{j \notin I^t_2} \tilde{f}\!\left(x^t_j; \text{NN}(M^t_j \odot x; \phi^t_j)\right) \prod_{j \in I^t_2} \tilde{f}\!\left(x^t_j; \text{NN}(M^t_j \odot x; \psi^t_j)\right), \quad (8)$$
where M^t_j ∈ {0, 1}^{dT} is a binary vector indicating the parents of x^t_j and ⊙ is the Hadamard product. Specifically, two separate neural networks with identical architectures are used to predict the mean vectors and diagonal covariance matrices that parameterize the multivariate Gaussian distributions for our f̃,
$$\mu^t, \Sigma^t = \text{NN}(M^t \odot x; \phi^t), \quad (9)$$
$$\tilde{\mu}^t, \tilde{\Sigma}^t = \text{NN}(M^t \odot x; \psi^t), \quad (10)$$
for the observational and interventional distributions, respectively. In summary, θ_ICM = {φ, ψ}. Causal Graph Sampling. Direct prediction of the graph structure in its binary form M^t and I^t is difficult and can lead to mode collapse. Following DCDI (Brouillard et al., 2020), we instead capture it through multivariate Bernoulli distributions.
Specifically, an upstream Recurrent Network (RN) module predicts a real matrix α^t and a real vector β^t of the same shapes as M^t and I^t. We then sample binary values as follows:
$$M^t \sim \text{Bern}(\alpha^t), \quad (11)$$
$$I^t \sim \text{Bern}(\beta^t). \quad (12)$$
All elements are mutually independent. The optimization difficulty incurred by the sampling process is addressed with the Straight-Through Gumbel estimator (Jang et al., 2016; Maddison et al., 2016).
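As a concrete illustration, below is a minimal PyTorch sketch of the binary special case of this estimator, producing hard Bernoulli samples on the forward pass while letting gradients flow through the relaxed sample; the temperature value is an assumption.

```python
import torch

def sample_bernoulli_st(probs, tau=1.0):
    """Straight-through sampling of binary masks such as M^t / I^t: hard
    {0,1} values on the forward pass, relaxed (binary Gumbel-Softmax,
    i.e. logistic-noise) gradients on the backward pass."""
    logits = torch.logit(probs.clamp(1e-6, 1 - 1e-6))
    u = torch.rand_like(probs).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log(1 - u)          # Logistic(0, 1) noise
    soft = torch.sigmoid((logits + noise) / tau)     # relaxed Bernoulli sample
    hard = (soft > 0.5).float()
    return hard + soft - soft.detach()               # straight-through estimator
```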
Recurrent Network. In the Recurrent Network (RN), we are concerned with modeling the distribution of the graph structure given the previous observations x^{1:t−1}. We consider all time-lagged, but no instantaneous, causal relations in this model. Thus at time step t, we need to predict the graph structure with respect to all of the previous t − 1 frames. We group these graphs into M^t ∈ {0, 1}^{d²×(t−1)}. For the intervention, it is a vector I^t ∈ {0, 1}^d.
$$h^t = f_{\text{GRU}}(h^{t-1}, x^t; \theta_{\text{GRU}}) \quad (13)$$
$$\alpha^t, \beta^t = f_{\text{MLP}}(h^t; \theta_{\text{MLP}}) \quad (14)$$
To model the non-stationary nature of real-world physical systems, we use a two-layer Gated Recurrent Unit (GRU) (Chung et al., 2014) to model temporal dependencies and an MLP to predict the likelihoods of existing causal relations α^t and of successful interventions β^t. In summary, θ_RN = {θ_GRU, θ_MLP}.
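A minimal PyTorch sketch of such a recurrent network follows. For brevity it predicts a single d×d adjacency instead of one matrix per previous time step (which the full model zero-pads), and it omits the instance normalization layers used in the paper; the hidden size is an assumption.

```python
import torch.nn as nn

class RecurrentNetwork(nn.Module):
    """Predicts alpha^t (edge probabilities) and beta^t (intervention
    probabilities, one per agent) from the observation history."""
    def __init__(self, state_dim, d, hidden=256):
        super().__init__()
        self.gru = nn.GRU(state_dim, hidden, num_layers=2, batch_first=True)
        def head(out_dim):  # three-layer MLP head with sigmoid output
            return nn.Sequential(
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim), nn.Sigmoid())
        self.alpha_head = head(d * d)   # reshaped to a (d, d) adjacency
        self.beta_head = head(d)
        self.d = d

    def forward(self, x_hist):
        # x_hist: (B, t-1, state_dim) flattened object states up to time t-1
        h, _ = self.gru(x_hist)
        h_t = h[:, -1]                  # last hidden state, Eq. 13
        alpha = self.alpha_head(h_t).view(-1, self.d, self.d)  # Eq. 14
        beta = self.beta_head(h_t)
        return alpha, beta
```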
3.3 LEARNING AND INFERENCE
Learning. We do not have access to the ground-truth graph structure. This motivates us to follow DCDI (Brouillard et al., 2020), which serves as the pedestal of our work, and train IRCM as a continuous constrained optimization problem. The core of our objective is to maximize the regularized log-likelihood in Eq. 8 conditioned on the object states:
$$\mathcal{L} = \sum_k \mathbb{E}_{x \sim P_x} \log f(x) - \zeta \sum_{(j,t)} \|M^t_j\|_0 - \eta \sum_t \|I^t\|_1, \quad \text{s.t.} \;\; \text{Tr}\!\left(e^{\sigma(\alpha^t)}\right) - d = 0 \quad (15)$$
ζ and η are hyperparameters controlling the sparsity of the causal graphs and of the intervention sets, respectively. Because we consider neither instantaneous causal relations nor relations going back in time, the learnt graph is guaranteed to be a DAG. Thus, IRCM naturally satisfies the acyclicity constraint Tr(e^{σ(α^t)}) − d = 0 (Zheng et al., 2018). To estimate the gradients of α^t and β^t with respect to L, we follow DCDI (Brouillard et al., 2020) and utilize the Straight-Through Gumbel estimator (Jang et al., 2016; Maddison et al., 2016). This is equivalent to using discrete Bernoulli samples during the forward pass and Gumbel-Softmax samples during backpropagation.
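To make the objective concrete, here is a minimal sketch of the regularized loss in its expected (differentiable) form, where the L0/L1 norms of the sampled masks are replaced by their expectations, i.e., sums of the predicted Bernoulli probabilities; the hyperparameter values and the function signature are assumptions.

```python
def ircm_loss(log_lik, alpha, beta, zeta=0.1, eta=0.1):
    """Negative of the regularized objective of Eq. 15. The acyclicity
    constraint is omitted here because, with only time-lagged edges,
    it holds by construction."""
    graph_sparsity = alpha.sum()    # E[sum ||M^t_j||_0] = sum of edge probs
    interv_sparsity = beta.sum()    # E[sum ||I^t||_1] = sum of interv. probs
    return -(log_lik - zeta * graph_sparsity - eta * interv_sparsity)
```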
Inference. During inference time, as shown in Figure 4, we use the observed and previously predicted sequence {x^{1:t_0}, x̂^{t_0+1:t−1}} to predict the multivariate distribution of x^t (t_0 is the length of the observed sequence). We then perform a secondary optimization to predict x̂^t:
$$\hat{x}^t = \arg\max_{x^t} f(x^t \mid x^{1:t-1}; \theta) = \sum_{(M^t, I^t)} \arg\max_{x^t} f(x^t \mid M^t, I^t; \theta_{\text{ICM}}) \; p(M^t, I^t \mid x^{1:t-1}; \theta_{\text{RN}}) \quad (16)$$
$$\hat{x}^t_j = \sum_{(M^t, I^t)} (\mu^t_j)^{\delta(j \notin I^t)} (\tilde{\mu}^t_j)^{\delta(j \in I^t)} \; p(M^t, I^t \mid x^{1:t-1}; \theta_{\text{RN}}), \quad (17)$$
where δ(j ∈ I^t) is the indicator function denoting whether object j is in the intervention set I^t. In practice, we take a Monte Carlo approach: we first sample (M^t, I^t) according to the distribution and then average the predicted mean values, taken from either the observational or the interventional network depending on the intervention set.
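The following sketch illustrates this Monte Carlo inference step; the `rn`/`icm` interfaces, tensor shapes, and the number of samples are hypothetical placeholders rather than the paper's actual API.

```python
import torch

@torch.no_grad()
def predict_next_state(rn, icm, x_hist, n_samples=16):
    """Monte Carlo estimate of Eq. 17: sample (M^t, I^t) from the RN's
    Bernoulli distributions, take the observational mean for objects outside
    the intervention set and the interventional mean otherwise, then average."""
    alpha, beta = rn(x_hist)                         # edge / intervention probs
    preds = []
    for _ in range(n_samples):
        M = torch.bernoulli(alpha)                   # sampled causal graph
        I = torch.bernoulli(beta)                    # sampled intervention set (B, d)
        mu, _ = icm.obs_net(M, x_hist)               # observational means (B, d, dim)
        mu_t, _ = icm.int_net(M, x_hist)             # interventional means
        preds.append(torch.where(I.unsqueeze(-1).bool(), mu_t, mu))
    return torch.stack(preds).mean(0)                # averaged prediction of x^t
```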
4 EXPERIMENT
4.1 EXPERIMENTAL SETUPS
Downstream tasks and Datasets. We conduct experiments to understand the efficacy of our proposed IRCM in discovering the causal structure to estimate object dynamics across time. More specifically, counterfactual reasoning and future forecasting in video sequences are selected to demonstrate this point.
Task 1: Counterfactual Reasoning. This problem is formalized as follows (Baradel et al., 2020): during training, we first infer the causal structure from a set of visual observations. The objective is to reason about the counterfactual outcome given a modified initial object state. The Counterfactual Physics benchmark (CoPhy) (Baradel et al., 2020) dataset contains two types of sequences, observational and counterfactual. The latter sequences are built by changing the initial object state from the observations while leaving other factors (such as inertia, gravity or friction) untouched. CoPhy comprises three physical scenarios in total: BlockTowerCF, BallsCF and CollisionCF. Each scenario provides the 3D positions of all objects in the scene. BlockTowerCF also includes a binary label for stability.
Task 2: Future Forecasting. Future forecasting refers to discerning the unknown object future given the observed history. We use the Fabric Manipulation (FM) (Li et al., 2020) dataset for the future forecasting task, where 2D coordinates of learned keypoints in the dynamic scene are provided.
Implementation Details. For both tasks, we use the same model architectures and the same settings for learning and inference. On each dataset, we directly use the visual features extracted from video frames by the previous state-of-the-art methods. Below are the details.
Visual Features. For the observations x^t, we use the visual features extracted from the input videos to improve the model performance. For a fair comparison on CoPhy, we adopt the experimental protocols of (Baradel et al., 2020) to examine the generalizability of IRCM. We train and test with 4 objects on BlockTowerCF and BallsCF. The experiments on CollisionCF utilize all types of objects (spheres and cylinders) for both training and test. Moreover, following the settings adopted in (Li et al., 2020), we first extract the 2D positions of keypoints from a pretrained DNN-based mechanism (Kulkarni et al., 2019) to represent the fabrics. Our experiments proceed by observing the first 5 time steps and forecasting the object states for the next 20 time steps during training, and by forecasting the forthcoming 5 steps from the previous 5 steps at test time. We first encode this location information with an MLP to obtain the object states for our model.
Model Architectures. We append two independent three-layer MLPs to a two-layer GRU to predict both α^t and β^t. At time instance τ, α^τ is reshaped into a set of d×d matrices for M^t. Notably, we zero-pad these matrices to ensure there are t−1 individual matrices in total per time instance for backpropagation. For faster learning convergence, we place an instance normalization layer before each ReLU activation in the MLP and use a sigmoid activation for the final output to make it a probability value.
Learning and Inference. In our experiments, the RMSProp optimizer (Goodfellow et al., 2016) is employed with the learning rate initialized at 8 × 10⁻⁵. Our implementation uses PyTorch. The experiments are executed on four Nvidia GeForce TITAN XPs, with 48 GB of memory in total.
Evaluation Metrics. Since none of the aforementioned datasets provide annotations for the causal graphical model, we gauge model performance by the observed object dynamics, which are generated from the unobserved causal structure. The ideal metrics should thus rely on the object states, i.e., coordinates and stability. In particular, we aim to understand how closely the outcomes approximate the ground truth. To this end, we calculate the mean square error (MSE) and the negative log-likelihood (NLL) (Ivanovic & Pavone, 2019) on object coordinates between ground truth and prediction. The NLL is the average negative log-likelihood of the predicted trajectory under a ground-truth trajectory distribution determined by a kernel density estimate. In addition, stability classification accuracy is used in our experiment on BlockTowerCF. Lower NLL and MSE and higher accuracy are preferred.
4.2 BENCHMARK RESULTS
As for comparison methods, we are primarily interested in assessing IRCM against two leading studies on estimating agent states in a video sequence in the context of learning a CGM: CoPhyNet (Baradel et al., 2020), which achieves cutting-edge results on the CoPhy benchmark, and the VCDN framework (Li et al., 2020), which performs best on FM.
CoPhyNet formulates the problem with a given causal structure to handle the object dynamics over time and approaches object interactions with fully-connected graph convolutions (Kipf & Welling, 2016; Battaglia et al., 2018). VCDN provides a model that infers a summary graph consisting of time-lagged causal relations, as shown in Figure 2. To the best of our knowledge, these two methods are the most relevant to ours.
We train our algorithm with the exact training objective of Eq. 15 on BallsCF, CollisionCF, and FM. For BlockTowerCF, we also include a stability classification term for a fair comparison:

$$\mathcal{L} = \sum_k \mathbb{E}_{x \sim P_x} \log f_k(x) - \left( \zeta \sum_{(j,t)} \|M^t_j\|_0 + \eta \sum_t \|I^t\|_1 + \text{CE}(\hat{S}^t, S^t) \right), \quad (18)$$

where the CE term is the cross entropy between predicted and ground-truth stability. We forward the predicted locations and the learnt M^t to a pre-trained GCN for the stability estimation. Table 1 shows that our model consistently beats the baselines, demonstrating the necessity of capturing nonstationary causal structures and of intervention-based causal discovery.
4.3 ABLATION STUDIES
The proposed IRCM has two main components: the Intervention-based Causal Model and the Recurrent Network. Below, we justify their design choices with the following ablation studies (Table 2).
Intervention-based Causal Model (ICM). The ICM relies on the causal DAG structure M and the intervention set I. Below, we demonstrate their necessity through ablation studies.
Importance of the Causal Graphical Model (M, I). IRCM w/o M, I treats the counterfactual reasoning task as future forecasting on both sequences by not transferring the learnt causal structure from the observational to the counterfactual sequences. As Table 2 shows, this significantly hurts the performance of IRCM; in fact, IRCM w/o M, I shows the worst scores on both metrics. The comparison of these values against the other methods overwhelmingly demonstrates the necessity and merit of taking the causal structure into account for video future forecasting.
Importance of Intervention (I). We justify the advantage of using interventional distributions to discover the causal structure in a video sequence over IRCM w/o I, which directly approximates Eq. 5 from the observations. We observe a large performance gap between IRCM w/o I and IRCM, demonstrating the impact of interventions on learning the causal structure.
Importance of long-term M. IRCM-markov serves to verify the advantage of IRCM treating M as a d²×(t−1) matrix. The scores of IRCM in Table 2 considerably exceed those of IRCM-markov, which sets t = 2. We attribute this to IRCM's evidently better capability of learning causal relationships: the agent states at several previous time instances can impact the current agent states. Additionally, the results favoring IRCM over CoPhyNet (Baradel et al., 2020) can be attributed to a similar reason.
Recurrent Network (RN). Instead of the sequential modeling of the causal graphical structures with the RN, one could predict a single structure or a sequence of temporally independent structures.
Importance of Nonstationary Modeling. IRCM-stationary assumes a time-invariant causal structure and thus shares a similar idea with V-CDN (Li et al., 2020), i.e., the learned (M^t, I^t) and the NN weights remain static. As shown in Table 2, IRCM significantly outperforms IRCM-stationary by better fitting the time-varying structures in the video sequences. This result emphasizes the importance of considering nonstationary structures in temporal modeling.
Importance of Sequential Modeling. We evaluate the advantage of extrapolating M^t through our RN against IRCM-indep, which learns M^t independently at each time step. Table 2 suggests that IRCM significantly outperforms IRCM-indep, demonstrating the advantage of the sequential modeling of causal structures.
5 CONCLUSION
In this paper, we propose an intervention-based recurrent causal model for video causal discovery. IRCM differs from prior works in the literature in that it introduces interventions to discover the causal structure for understanding object dynamics in video sequences. At its core, we introduce a recurrent network to model the interventional distributions. This formulation allows us to capture the time-varying property that widely exists in video sequences. Experimental results show that IRCM delivers better performance in both counterfactual reasoning and future forecasting compared with prior works. One future direction is to loosen the sufficiency assumption and incorporate confounding elements into our framework, enabling the discovery of causal relationships in real-world applications.
2. What are the strengths of the proposed approach, particularly in extending previous frameworks to non-stationary video sequences?
3. What are the weaknesses of the paper regarding its experimental evaluation and comparison with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor comments or suggestions that can improve the paper without affecting its overall rating? | Summary Of The Paper
Review | Summary Of The Paper
The paper extends the existing methods for learning causal discovery methods for videos where's the underlying causal structure is changing at each time-step. The paper evaluates the proposed method on two video prediction datasets. Experimental results show that the proposed method achieves better performance as compared to the various baselines considered.
Review
Strengths:
The paper is well written and tackles an interesting problem.
The authors extend the previous intervention-based causal discovery framework to non-stationary video sequences.
Weakness:
It may be more useful to evaluate the proposed method as to how it performs on the downstream tasks (on more complex datasets) as compared to multi-step prediction methods (for example, following a similar setup as in [1]).
Mean and variance across different seeds for all the results in table 1 and table 2 are not mentioned.
[1] Attention over learned object embeddings enables complex visual reasoning, https://arxiv.org/abs/2012.08508
Related Work:
Some of the related work is not mentioned. For example: RIMs/NPS consists of an ensemble of "variables" interacting with each other via self-attention. The "graph" is dynamic and dependent upon the time-step.
[1] RIMs, https://arxiv.org/abs/1909.10893 [2] NPS, https://arxiv.org/abs/2103.01937
Minor Comments: These comments do not affect my rating.
In the introduction, the paper writes "Brouillard et al. (2020) (DCDI) recently proposes a differentiable causal model for a spatial graph to naturally capture the abrupt change of probability distributions during interventions". This is slightly misleading. DCDI builds upon SDI (Ke et al., 2019) [1], which the authors have already cited in the paper. Similarly, the idea of sampling from a Bernoulli distribution was used in SDI for learning the underlying causal structure. |
ICLR | Title
Intervention-based Recurrent Casual Model for Non-stationary Video Causal Discovery
Abstract
Nonstationary causal structures are prevalent in real-world physical systems. For example, the stacked blocks interact until they fall apart, while the billiard balls move independently until they collide. However, most video causal discovery methods can not discover such nonstationary casual structures due to the lack of modeling for the instantaneous change and the dynamics of the causal structure. In this work, we propose the Intervention-based Recurrent Casual Model (IRCM) for nonstationary video casual discovery. First, we extend the existing intervention-based casual discovery framework for videos to formulate the instantaneous change of the causal structure in a principled manner. Then, we use a recurrent model to sequentially predict the causal structure model based on previous observations to capture the nonstationary dynamic of the causal structure. We evaluate our method on two popular physical system simulation datasets with various types of multi-body interactions. Experiments show that the proposed IRCM achieves the state-of-the-art performance on both the counterfactual reasoning and future forecasting tasks.
1 INTRODUCTION
Causal reasoning from visual input is essential for intelligence systems in understanding the complex mechanisms in the physical world. For instance, autonomous vehicles need to infer the unseen causal structures on the road that drives the state evolution of other agents across time to anticipate future events better accordingly. One main obstacle in discovering such causal structures is the dynamic nature of events. In Figure 1, we illustrate the varying casual relationship in a simple multi-body system where the stacked blocks fall to the ground. In nonstationary video sequences, the causal structure can have abrupt changes and/or long-term dependencies, posing challenges for casual graphical models (CGM).
For the first challenge, most CGMs in video causal understanding can not handle abrupt causal relationship changes. Li et al. (2020) (VCDN, Figure 2a) partially address this issue by learning a stationary causal summary graph, where causal structures are learned but fixed throughout the video. Zheng et al. (2018) (DYNOTEARS, Figure 2b) relaxed such fixed structure settings by assuming a
stationary order for the period bigger than 1. On the other hand, Brouillard et al. (2020) (DCDI, Figure 2c) recently proposes a differentiable causal model for a spatial graph to naturally capture the abrupt change of probability distributions during interventions. In this work, we naturally extend the intervention-based causal model to the graph with time-leg edges in videos, i.e., current objects’ states are fully determined by previous states (Figure 2d).
For the second challenge, most CGMs in video causal understanding purely depend on the object state observations. That is the causal graph at time t is conditionally independent from the causal graph at time t − 1 given the object states’ observations. Illustrated in Figure 1, CGMs that can be represented as graphs can be modeled as a trajectory in the nonstationary video. In this work, we adopt a recurrent network to sequentially predict CGM to model the trajectories.
Based on the intuitions above, we propose the Intervention-based Recurrent Casual Model (IRCM) to better capture the dynamics in nonstationary videos. As the ground truth CGMs are often not directly measurable, we adopt two popular downstream tasks to benchmark the efficacy of the proposed model: counterfactual reasoning and future state forecasting. Deducing the alternative results countering the reality over the discovered CGM can directly express the impacts of causality. Also, the causal knowledge endows better insights into which factors affect the target variable and how to manipulate the system properly.
We summarize the contribution of this work as follows:
• We introduce the IRCM model to extend the previous intervention-based causal discovery framework to nonstationary video sequences.
• We propose to use recurrent networks to capture the long-term trajectory of Causal Graph Models (CGM) and provide optimization solution to train recurrent networks together with downstream causal models.
• We achieve state-of-the-art performance on two downstream tasks: counterfactual reasoning and future forecasting on two standard benchmark datasets (CoPhy (Baradel et al., 2020), Fabric Manipulation (Brouillard et al., 2020)) by showing an averaged improvement of 11% across 9 metrics.
2 RELATED WORK
Causal Discovery of Stationary Models. Given the input time-series data, the goal is to uncover one fixed directed acyclic graph (DAG), where edges represent the direct causal relationships among variables. There are two main approaches: observation-based and intervention-based. The observation-based approach fully relies on the passive observation of the input system. Constraintbased methods rely on conditional independence tests as constraint-satisfaction to recover MarkovEquivalent Graphs (Spirtes et al., 2000; Entner & Hoyer, 2010; Colombo et al., 2011). Score-based methods assign a score to each DAG, and perform searching in this score space (Chickering, 2002; Zheng et al., 2018). The third class of methods exploits such asymmetries or causal footprints to uniquely identify a DAG (Shimizu, 2014; Zhang & Hyvärinen, 2009).
In practice, domain experts may design interventional experiments and collect additional data of the input system. The intervention-based approach aims to combine such interventional data with the observational data for a better identifiability of the causal structure (Eberhardt, 2012; Eberhardt et al., 2012). However, many of current approaches (Hyttinen et al., 2013; Ghassami et al., 2018b; Kocaoglu et al., 2017; Wang et al., 2017; Shanmugam et al., 2015; Peters et al., 2016; Rothenhäusler et al., 2015; Ke et al., 2019) either assume full knowledge of the intervention, make strong assumptions about the model class, or have scalability limitations. Recently, Brouillard et al. (2020) utilizes the continuous-constrained framework to model the interventions with neural network models. In contrast, our proposed method aims to uncover nonstationary causal structures.
Causal Discovery of Nonstationary Models. To extend to nonstationary data, recent works discover causal models in each sliding window separately, and then compare and merge them. Adams & MacKay (2007) explicitly detect the change points and divide the time series into stationary processes. To implicitly model the change of the causal model, Huang et al. (2015) assume certain smoothness properties and Zhang et al. (2017) use kernel distribution embeddings to describe shifting probabilistic distributions. Later, the problem was reformulated with the online parameter learning framework (Song et al., 2009; Xing et al., 2010). To tackle the varying instantaneous causal relations, both linear (Ghassami et al., 2018a; Huang et al., 2019; Huang & Zhang, 2019; Huang et al., 2020a) and nonlinear (Huang et al., 2020b) causal models are proposed. Our proposed method treats the nonstationary changes of the system as interventions and re-purposes the intervention-based framework to discover time-varying causal graph structures.
Video Causal Discovery. The relevant literature in the computer vision community has accumulated several efforts to tackle down the challenges of video modeling and prediction (Ye et al., 2019; Hsieh et al., 2018; Yi et al., 2020). Nevertheless, one topic that had enjoyed recent success is reasoning objective dynamics in a video sequence. A line of research attempts to solve this task by modeling the correlations in a spatio-temporal context, such as (Yi* et al., 2020; Chen et al., 2021; Bakhtin et al., 2019; Qi et al., 2021; Zhang et al., 2021). However, focusing on modeling the dependencies substantially might not suffice to offer clear interpretations of object dynamics as we humans do. Addressing this issue, the authors of (Baradel et al., 2020) and (Li et al., 2020) try to make efforts to introduce causal knowledge (Schölkopf et al., 2021; Bengio et al., 2020; Runge et al., 2019) to this task. A few works adapt various topics into such a context. Whereas neither of them is able to fully uncover the causal structure underlying the video sequences.: CoPhyNet (Baradel et al., 2020) derives an alternative output based on a known causal graph; VCDN (Li et al., 2020) focus on recovering the stationary causal structures from the video. Instead, our proposed method apply the new intervention-based method to capture nonstationary causal structures.
3 METHODOLOGY
In this section, we present the Intervention-based Recurrent Causal Model (IRCM) for nonstationary video causal discovery. We first give an overview of the model architecture, as shown in Figure 3, and then dive into the two components of IRCM: the Recurrent Network and the Intervention-based Causal Model.
[Figure 3: Model overview. The Recurrent Network predicts the parameters from which the causal graph and intervention set are sampled for the Intervention-based Causal Model.]
3.1 PROBLEM FORMULATION
We factorize the joint probability of a temporal sequence into a sequential form:
p(x^{1:T}; \theta) = p(x^1; \theta) \prod_{t=2}^{T} p(x^t \mid x^{1:t-1}; \theta), \qquad (1)
where θ denotes the model parameters to learn. This formulation makes it easy to do future forecasting by conditioning any unknown x^t on the observed or previously predicted history x^{1:t-1}. For simplicity, we decode multiple frames in an autoregressive way, i.e., at each timestep we predict x̂^t as the mode of p(x^t | x^{1:t-1}; θ) and condition further predictions on it. Furthermore, we decompose the density function into a Recurrent Network (RN) and an Intervention-based Causal Model (ICM) by:
f_{\theta}(x^t \mid x^{1:t-1}) = f_{\mathrm{ICM}}(x^t \mid M^t, I^t, x^{1:t-1}; \theta_{\mathrm{ICM}}) \qquad (2)
M^t, I^t \sim \mathrm{Bern}(\alpha^t, \beta^t) \qquad (3)
\alpha^t, \beta^t = \mathrm{RN}(x^{1:t-1}; \theta_{\mathrm{RN}}) \qquad (4)
In this way, we extend the continuous constrained optimization framework for structure learning to sequential data.
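To make the decomposition concrete, the following is a minimal sketch of one generative step per Eqs. (2)-(4); it is an illustration, not the released implementation, and `rn` and `icm` are hypothetical callables standing in for the modules described below (α^t and β^t are treated as logits here).

```python
# A minimal sketch of one generative step (Eqs. 2-4); `rn` and `icm` are
# hypothetical callables standing in for the modules described below.
import torch

def step_log_likelihood(x_hist, x_t, rn, icm):
    """Log-density of frame x_t given the history x_hist."""
    alpha_t, beta_t = rn(x_hist)                   # Eq. (4): RN predicts Bernoulli parameters
    M_t = torch.bernoulli(torch.sigmoid(alpha_t))  # Eq. (3): sample causal graph
    I_t = torch.bernoulli(torch.sigmoid(beta_t))   # Eq. (3): sample intervention set
    return icm(x_t, M_t, I_t, x_hist)              # Eq. (2): ICM evaluates the density
```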
3.2 MODEL DESIGNS
Intervention-based Causal Model. Formally, given d observed agents in the scene from time 1 to T, a joint probability distribution f(x) depicts their states through time. In the context of a Causal Graphical Model (CGM) (Pearl et al., 2016), f(x) is defined by a directed acyclic graph (DAG) G with dT nodes, where node x^t_j is associated with agent j at time step t. Directed edges represent causal relationships. The distribution of agent states at time t can be factorized as:
f(x^t \mid x^{1:t-1}; \theta) = \prod_{j=1}^{d} f(x^t_j \mid \mathrm{Pa}(x^t_j); \theta), \qquad (5)
where Pa(x^t_j) denotes the set of parent nodes of x^t_j in G. Eq. 5 implicitly assumes causal sufficiency (Peters et al., 2017), i.e., our work does not involve any hidden confounding elements. Also, we consider neither instantaneous edges nor edges that go back in time. Simply put, Pa(x^t_j) ⊆ {x^i_{j'}}_{i<t}. This restriction makes our causal graph fully identifiable in the context of video sequences, as in Li et al. (2020).
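As a small illustration (assumed toy shapes, not the paper's code) of why this restriction guarantees acyclicity: flattening the dT nodes in time order makes every allowed edge point from an earlier block to a later one, so the adjacency matrix is strictly upper-triangular and no directed cycle can exist.

```python
# Illustration: with parents restricted to strictly earlier time steps, the
# adjacency matrix over all d*T nodes (ordered by time) is strictly
# upper-triangular, hence the graph is a DAG by construction.
import numpy as np

d, T = 3, 4                                 # agents, time steps (arbitrary example)
A = np.zeros((d * T, d * T), dtype=int)
for t in range(T):
    for i in range(t):                      # source time i < target time t
        A[i * d:(i + 1) * d, t * d:(t + 1) * d] = 1   # allowed edge block

assert np.all(np.tril(A) == 0)              # no same-time or backward edges
```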
Eq. 5 allows us to swap f(x^t_j | Pa(x^t_j)) with another conditional distribution; such a replacement is called an intervention. An intervention target set I ⊆ V is the subset of graph nodes on which interventions are exerted. We consider an intervention family \mathcal{I} = {I_k}_{k=1}^{K}; in particular, I_1 = ∅ denotes the observational distribution. We further use I^t_k to denote the intervened nodes at time t in the k-th intervention family. Given an intervention family I_k, we formalize the intervened distribution at time t by:
f^{(k)}(x^t) = \prod_{j \notin I^t_k} f^{(1)}(x^t_j \mid \mathrm{Pa}(x^t_j)) \prod_{j \in I^t_k} f^{(k)}(x^t_j \mid \mathrm{Pa}(x^t_j)). \qquad (6)
In our case, we set K = 2, i.e., we assume a single (non-observational) intervention family. Following Brouillard et al. (2020), we use neural networks (NN) to output the parameters of a density function f̃, e.g., a Gaussian:
f^{(1)} = \tilde{f}(\cdot\,; \mathrm{NN}(\cdot\,; \phi^t_j)), \qquad f^{(2)} = \tilde{f}(\cdot\,; \mathrm{NN}(\cdot\,; \psi^t_j)), \qquad (7)
where φ and ψ are the parameters of the observational and interventional density functions, respectively. Thus, Eq. 6 can be written as:
f_{\mathrm{ICM}}(x^t \mid M^t, I^t, x^{1:t-1}; \theta_{\mathrm{ICM}}) = \prod_{j \notin I^t_2} \tilde{f}\big(x^t_j; \mathrm{NN}(M^t_j \odot x; \phi^t_j)\big) \prod_{j \in I^t_2} \tilde{f}\big(x^t_j; \mathrm{NN}(M^t_j \odot x; \psi^t_j)\big), \qquad (8)
where M^t_j ∈ {0, 1}^{dT} is a binary vector indicating the parents of x^t_j and ⊙ is the Hadamard product. Specifically, two separate neural networks with identical architectures are used to predict the mean vectors and diagonal covariance matrices that parameterize the multivariate Gaussian distributions for f̃,
\mu^t, \Sigma^t = \mathrm{NN}(M^t \odot x; \phi^t), \qquad (9)
\tilde{\mu}^t, \tilde{\Sigma}^t = \mathrm{NN}(M^t \odot x; \psi^t), \qquad (10)
for the observational and interventional distributions, respectively. In summary, θ_ICM = {φ, ψ}.
Causal Graph Sampling. Direct prediction of the graph structure in its binary form M^t and I^t is difficult and can lead to mode collapse. Following DCDI (Brouillard et al., 2020), we instead capture it through multivariate Bernoulli distributions.
Specifically, an upstream module, the Recurrent Network (RN), predicts a real-valued matrix α^t and a real-valued vector β^t of the same shapes as M^t and I^t. We then sample binary values as follows:
M^t \sim \mathrm{Bern}(\alpha^t), \qquad (11)
I^t \sim \mathrm{Bern}(\beta^t). \qquad (12)
All elements are mutually independent. The optimization difficulty incurred by the sampling process is addressed with the Straight-Through Gumbel estimator (Jang et al., 2016; Maddison et al., 2016), sketched below.
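The following is a hedged PyTorch sketch of this sampling step, following the generic binary-concrete recipe of Jang et al. (2016) and Maddison et al. (2016); the temperature value and exact parameterization used by DCDI/IRCM may differ.

```python
# A sketch of Straight-Through Gumbel (binary concrete) sampling for Eqs.
# (11)-(12): discrete {0,1} values on the forward pass, relaxed gradients on
# the backward pass. Variable names and the temperature are our choices.
import torch

def st_gumbel_bernoulli(logits, tau=1.0):
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    g = torch.log(u) - torch.log(1 - u)        # Logistic(0, 1) noise
    soft = torch.sigmoid((logits + g) / tau)   # relaxed Bernoulli sample
    hard = (soft > 0.5).float()                # discrete sample used forward
    return hard + (soft - soft.detach())       # straight-through gradients

alpha_t = torch.zeros(4 * 4, requires_grad=True)   # e.g. d*d edge logits, d = 4
M_t = st_gumbel_bernoulli(alpha_t)                 # Eq. (11)
M_t.sum().backward()                               # gradients reach alpha_t
assert alpha_t.grad is not None
```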
Recurrent Network. In the Recurrent Network (RN), we model the distribution of the graph structure given previous observations x^{1:t-1}. We consider all time-lagged but no instantaneous causal relations in this model. Thus, at time step t, we need to predict a graph structure over all of the previous t − 1 frames. We group these graphs into M^t ∈ {0, 1}^{d^2×(t−1)}. For the intervention, it is a vector I^t ∈ {0, 1}^d.
h^t = f_{\mathrm{GRU}}(h^{t-1}, x^t; \theta_{\mathrm{GRU}}) \qquad (13)
\alpha^t, \beta^t = f_{\mathrm{MLP}}(h^t; \theta_{\mathrm{MLP}}) \qquad (14)
To model the nonstationary nature of real-world physical systems, we use a two-layer Gated Recurrent Unit (GRU) (Chung et al., 2014) to model temporal dependencies and an MLP to predict the likelihood of existing causal relations α^t and of successful interventions β^t. In summary, θ_RN = {θ_GRU, θ_MLP}. A minimal sketch of this module follows.
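In the sketch below, the hidden size, the fixed `max_lag` (the zero-padded number of lagged graphs), and the simplified one-layer heads are illustrative assumptions rather than the paper's exact configuration (which uses three-layer MLPs with instance normalization).

```python
# A minimal sketch of the Recurrent Network (Eqs. 13-14). Hidden size,
# max_lag, and the one-layer heads are assumptions for illustration.
import torch
import torch.nn as nn

class RecurrentNet(nn.Module):
    def __init__(self, d, state_dim, hidden=128, max_lag=20):
        super().__init__()
        self.gru = nn.GRU(d * state_dim, hidden, num_layers=2, batch_first=True)
        self.graph_head = nn.Linear(hidden, d * d * max_lag)   # logits for alpha^t
        self.interv_head = nn.Linear(hidden, d)                # logits for beta^t

    def forward(self, x_hist):                # x_hist: (B, t-1, d*state_dim)
        h, _ = self.gru(x_hist)               # Eq. (13)
        h_t = h[:, -1]                        # hidden state at the last step
        return self.graph_head(h_t), self.interv_head(h_t)     # Eq. (14)

rn = RecurrentNet(d=4, state_dim=3)
alpha_t, beta_t = rn(torch.randn(2, 5, 12))   # batch of 2, 5 observed frames
```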
3.3 LEARNING AND INFERENCE
Learning. We do not have access to the ground-truth graph structure. This motivates us to follow DCDI (Brouillard et al., 2020), which serves as the pedestal of our work, and train IRCM by solving a continuous constrained optimization problem. The core of our objective is to maximize the regularized log-likelihood of Eq. 8 conditioned on the object states:
\mathcal{L} = \sum_k \mathbb{E}_{x \sim P_x} \log f(x) - \zeta \sum_{(j,t)} \lVert M^t_j \rVert_0 - \eta \sum_t \lVert I^t \rVert_1, \quad \text{s.t.}\ \operatorname{Tr}(e^{\sigma(\alpha^t)}) - d = 0 \qquad (15)
where ζ and η are hyperparameters that control the sparsity of the causal graphs and intervention sets, respectively. Because we consider neither instantaneous causal relations nor relations that go back in time, the learnt graph is guaranteed to be a DAG. Thus, IRCM naturally satisfies the acyclicity constraint Tr(e^{σ(α^t)}) − d = 0 (Zheng et al., 2018). To estimate the gradients of α^t and β^t with respect to \mathcal{L}, we follow DCDI (Brouillard et al., 2020) and utilize the Straight-Through Gumbel estimator (Jang et al., 2016; Maddison et al., 2016). This is equivalent to using discrete Bernoulli samples during the forward pass and Gumbel-Softmax samples during backpropagation.
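A sketch of this objective is given below; following the DCDI-style relaxation, the L0 penalty is replaced by its expectation under the Bernoulli distribution (the sum of edge probabilities), which is differentiable. The ζ and η values are placeholders, not the tuned hyperparameters.

```python
# A hedged sketch of the regularized objective in Eq. (15). ||M^t||_0 is
# handled in expectation as the sum of Bernoulli edge probabilities
# sigmoid(alpha^t), which is differentiable; likewise for ||I^t||_1.
import torch

def ircm_loss(log_lik, alpha_t, beta_t, zeta=0.1, eta=0.1):
    graph_sparsity = torch.sigmoid(alpha_t).sum()    # E[ sum_j ||M_j^t||_0 ]
    interv_sparsity = torch.sigmoid(beta_t).sum()    # expected ||I^t||_1
    return -(log_lik - zeta * graph_sparsity - eta * interv_sparsity)

loss = ircm_loss(torch.tensor(-3.2), torch.zeros(16), torch.zeros(4))
```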
Inference. At inference time, as shown in Figure 4, we use the observed and previously predicted sequence {x^{1:t_0}, x̂^{t_0+1:t-1}} to predict the multivariate distribution of x^t (t_0 is the length of the observed sequence). We then perform a secondary optimization to predict x̂^t:
\hat{x}^t = \arg\max_{x^t} f(x^t \mid x^{1:t-1}; \theta) \qquad (16)
\phantom{\hat{x}^t} = \sum_{(M^t, I^t)} \Big[\arg\max_{x^t} f(x^t \mid M^t, I^t; \theta_{\mathrm{ICM}})\Big]\, p(M^t, I^t \mid x^{1:t-1}; \theta_{\mathrm{RN}}),
\hat{x}^t_j = \sum_{(M^t, I^t)} (\mu^t_j)^{\delta(j \notin I^t)} (\tilde{\mu}^t_j)^{\delta(j \in I^t)}\, p(M^t, I^t \mid x^{1:t-1}; \theta_{\mathrm{RN}}), \qquad (17)
where δ(j ∈ I^t) is the indicator function of whether object j is in the intervention set I^t. In practice, we take a Monte Carlo approach: we first sample (M^t, I^t) according to this distribution and then average the predicted mean values from either the observational or the interventional distribution, as sketched below.
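In this sketch, `icm_means` is a hypothetical helper (not an API from the paper) that returns the observational and interventional means (μ^t, μ̃^t) for a sampled graph.

```python
# A sketch of Monte Carlo inference per Eq. (17): sample (M^t, I^t), pick the
# observational or interventional mean per object, and average over samples.
import torch

def predict_frame(alpha_t, beta_t, icm_means, n_samples=32):
    preds = []
    for _ in range(n_samples):
        M_t = torch.bernoulli(torch.sigmoid(alpha_t))
        I_t = torch.bernoulli(torch.sigmoid(beta_t))    # (d,) intervention indicator
        mu, mu_tilde = icm_means(M_t)                   # each of shape (d, state_dim)
        mask = I_t.unsqueeze(-1)                        # broadcast over state dims
        preds.append((1 - mask) * mu + mask * mu_tilde)
    return torch.stack(preds).mean(dim=0)               # x_hat^t
```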
4 EXPERIMENT
4.1 EXPERIMENTAL SETUPS
Downstream Tasks and Datasets. We conduct experiments to assess the efficacy of our proposed IRCM in discovering the causal structure needed to estimate object dynamics across time. More specifically, counterfactual reasoning and future forecasting in video sequences are selected to demonstrate this point.
Task 1: Counterfactual Reasoning. This problem is formalized as follows (Baradel et al., 2020): during training, we first infer the causal structure from a set of visual observations. The objective is then to reason about the counterfactual outcome given a modified initial object state. The Counterfactual Physics benchmark (CoPhy) (Baradel et al., 2020) contains two types of sequences, observational and counterfactual. The latter is built by changing the initial object state from the observations while keeping other factors (such as inertia, gravity, or friction) untouched. CoPhy comprises three physical scenarios in total: BlockTowerCF, BallsCF, and CollisionCF. Each scenario provides the 3D positions of all objects in the scene. BlockTowerCF also includes a binary label for stability.
Task 2: Future Forecasting. Future forecasting refers to discerning the unknown object future given observed histories. We use the Fabric Manipulation (FM) (Li et al., 2020) dataset for this task, where 2D coordinates of learned keypoints in the dynamic scene are provided.
Implementation Details. For both tasks, we use the same model architectures and the same settings for learning and inference. On each dataset, we directly use the visual features extracted from video frames by the previous state-of-the-art methods. Details follow.
Visual Features. For the observations x^t, we use visual features extracted from the input videos to improve model performance. For a fair comparison on CoPhy, we adopt the experimental protocols of Baradel et al. (2020) to examine the generalizability of IRCM. We train and test with 4 objects on BlockTowerCF and BallsCF. The experiments on CollisionCF utilize all object types (spheres and cylinders) for both training and testing. Moreover, following the settings of Li et al. (2020), we first extract the 2D positions of keypoints from a pretrained DNN-based mechanism (Kulkarni et al., 2019) to represent the fabrics. Our experiments proceed by observing the first 5 time steps and predicting object states for the next 20 time steps during training, and forecasting the forthcoming 5 steps from the previous 5 steps at test time. We encode this location information with an MLP to obtain the object states for our model.
Model Architectures. We append two independent three-layer MLPs to a two-layer GRU to predict α^t and β^t. At time instance τ, α^τ is reshaped into a set of d×d matrices for M^τ. Notably, we zero-pad these matrices to ensure there are t − 1 individual matrices in total per time instance for backpropagation. For faster convergence, we place an instance normalization layer before each ReLU activation in the MLP and use a sigmoid activation on the final output to make it a probability value.
Learning and Inference. In our experiments, the RMSProp optimizer (Goodfellow et al., 2016) is employed with the learning rate initialized at 8 × 10^{-5}. Our implementation uses PyTorch. The experiments are executed on four Nvidia GeForce TITAN XPs with 48 GB of memory in total.
Evaluation Metrics. Since none of the aforementioned datasets provide annotations for the causal graphical model, we gauge model performance by the observed object dynamics, which are generated from the unobserved causal structure. The ideal metrics should therefore rely on object states, i.e., coordinates and stability. In particular, we aim to understand how closely the outcomes approximate the ground truth. To this end, we calculate the mean square error (MSE) and the negative log-likelihood (NLL) (Ivanovic & Pavone, 2019) on object coordinates between ground truth and prediction. NLL is the average negative log-likelihood of the predicted trajectory under a ground-truth trajectory distribution determined by a kernel density estimate. In addition, stability classification accuracy is used for our experiment on BlockTowerCF. Lower NLL and MSE and higher accuracy are preferred.
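For concreteness, a sketch of the KDE-based NLL computation is given below; the bandwidth selection and the exact evaluation protocol are assumptions and may differ from the benchmarks' official evaluation code.

```python
# A sketch of the KDE-based trajectory NLL (after Ivanovic & Pavone, 2019):
# fit a kernel density estimate to ground-truth trajectory samples and score
# the predicted coordinates under it. Lower is better.
import numpy as np
from scipy.stats import gaussian_kde

def trajectory_nll(gt_samples, pred_traj):
    """gt_samples: (n, dim) trajectory samples; pred_traj: (T, dim)."""
    kde = gaussian_kde(gt_samples.T)                     # expects (dim, n)
    log_p = np.log(np.maximum(kde(pred_traj.T), 1e-12))  # density at predictions
    return float(-log_p.mean())

rng = np.random.default_rng(0)
print(trajectory_nll(rng.normal(size=(100, 3)), rng.normal(size=(5, 3))))
```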
4.2 BENCHMARK RESULTS
For comparison, we are primarily interested in assessing IRCM against two leading studies on estimating agent states in a video sequence in the context of learning a CGM: CoPhyNet (Baradel et al., 2020), which achieves cutting-edge results on the CoPhy benchmark, and V-CDN (Li et al., 2020), which performs best on FM.
CoPhyNet assumes a given causal structure to handle object dynamics over time and approaches object interactions with fully-connected graph convolutions (Kipf & Welling, 2016; Battaglia et al., 2018). V-CDN infers a summary graph consisting of time-lagged causal relations, as shown in Figure 2. To the best of our knowledge, these two methods are the most relevant to ours.
We train our algorithm with the exact training objective of Eq. 15 on BallsCF, CollisionCF, and FM. For BlockTowerCF, we also include a stability classification term for a fair comparison:
\mathcal{L} = \sum_k \mathbb{E}_{x \sim P_x} \log f^{(k)}(x) - \Big(\zeta \sum_{(j,t)} \lVert M^t_j \rVert_0 + \eta \sum_t \lVert I^t \rVert_1 + \mathrm{CE}(\hat{S}^t, S^t)\Big), \qquad (18)
where the CE term is the cross entropy between the predicted and ground-truth stability. We forward the predicted locations and the learnt M^t to a pre-trained GCN for stability estimation. Table 1 shows that our model consistently beats the baselines, demonstrating the necessity of capturing nonstationary causal structures and of intervention-based causal discovery.
4.3 ABLATION STUDIES
The proposed IRCM has two main components: Intervention-based Causal Model and Recurrent Network. Below, we justify their design choices with the following ablation studies (Table 2).
Intervention-based Causal Model (ICM). The ICM relies on the causal DAG structure M and the intervention set I. Below, we demonstrate their necessity through ablation studies.
Importance of the Causal Graphical Model (M, I). IRCM w/o M, I treats the counterfactual reasoning task as future forecasting on both sequences by not transferring the learnt causal structure from the observational to the counterfactual sequences. As Table 2 shows, this significantly hurts performance; in fact, IRCM w/o M, I shows the worst scores on both metrics. The comparison against the other methods demonstrates the necessity and merit of taking the causal structure into account for video future forecasting.
Importance of Intervention (I). We justify the advantages of using the interventional distribution to discover the causal structure in a video sequence by comparing against IRCM w/o I, which directly approximates Eq. 5 from the observations alone. The large performance gap between IRCM w/o I and IRCM demonstrates the impact of interventions on learning the causal structure.
Importance of long-term M. IRCM-markov serves to verify the advantage of IRCM treating M as a d^2×(t−1) matrix rather than conditioning only on the previous frame (t = 2). The scores of IRCM in Table 2 considerably exceed those of IRCM-markov. We attribute this to IRCM evidently offering a better capability to learn causal relationships than the first-order Markov variant; it also conveys that agent states from several previous time instances can impact the current agent states. Additionally, the results favoring IRCM over CoPhyNet (Baradel et al., 2020) can be attributed to a similar reason.
Recurrent Network (RN). Instead of the sequential modeling of the causal graphical structures with RN, we can predict a single structure or a sequence of structures that are temporally independent.
Importance of Nonstationary Modeling. IRCM-stationary assumes an invariant causal structure over time and thus shares a similar idea with V-CDN (Li et al., 2020): the learned (M^t, I^t) and the weights of the NN remain static. As shown in Table 2, IRCM significantly outperforms IRCM-stationary by better fitting the time-varying structures in the video sequences. This result emphasizes the importance of considering nonstationary structures in temporal modeling.
Importance of Sequential Modeling. We evaluate the advantage of extrapolating M^t through our RN against IRCM-indep, which learns M^t independently at each time step. Table 2 suggests that IRCM significantly outperforms IRCM-indep, demonstrating the advantage of sequential modeling of causal structures.
5 CONCLUSION
In this paper, we propose an intervention-based recurrent causal model for video causal discovery. IRCM differs from prior works in the literature in that it introduces interventions to discover the causal structure underlying object dynamics in video sequences. At its core, we introduce a recurrent network to model the interventional distributions. This formulation allows us to capture the time-varying property that widely exists in video sequences. Experimental results show that IRCM delivers better performance on both counterfactual reasoning and future forecasting than prior works. One future direction is to relax the causal sufficiency assumption and incorporate confounding elements into our framework to enable discovering causal relationships in real-world applications.
1. What is the main contribution of the paper regarding nonstationary video data discovery?
2. What are the strengths and weaknesses of the proposed Intervention-based Recurrent Causal Model (IRCM)?
3. Do you have any concerns about the paper's illustration of nonstationary causal structures?
4. How does the author address the instantaneous change in the causal structure?
5. How does the recurrent network accumulate previous information to predict the causal graph?
6. Can badly learned variables affect the model's performance? If so, how can this issue be fixed?
7. What is the significance of Figure 2e, and do its matrix representations correspond to the causal graphs in Figure 2d?
8. Are there inconsistencies in subscripts and superscripts throughout the paper?
9. What is the range of K of the intervention family, and why use k = 2?
10. How does the objective (15) work when all elements of Mt and It are assumed to be mutually independent?
11. Will discovering the causal structure over all variables (dimensions) of extracted features improve the model's performance?
12. Would more complex experiments on realistic datasets enhance the paper's verification?
13. What explicit assumptions does the paper lack regarding the underlying generative process of nonstationary data?
Summary Of The Paper
The authors propose a new model, called Intervention-based Recurrent Causal Model (IRCM), to discover causal structures of nonstationary video data. The model consists of two modules: recurrent network (RN), which is used to sample DAG structure and intervention set, and intervention-based causal model (ICM), which predicts the mean and covariance of multivariate Gaussian distribution for next observation and intervention sets. They evaluate the proposed method on two physical system simulation datasets in terms of both the counterfactual reasoning and future forecasting tasks.
Review
The example used for illustration of nonstationary causal structures might be inappropriate. Since the edges indicate interaction between objects in physical systems, that is, each edge is bidirectional, the resulting structure cannot be simply seen as a causal graph.
In the abstract, the authors claim that they extend the existing intervention-based causal discovery framework for videos to formulate the instantaneous change of the causal structure. However, in the subsequent sections (e.g., section 3.2) they also say that "they neither consider the instantaneous edges nor edges that go back in time in this work." This might be a bit unclear and confusing. The authors should explicitly explain what the instantaneous change is.
In this work, the authors assume the non-Markovian setting and thus adopt a recurrent network to accumulate all the previous information for predicting the causal graph. This could introduce some spurious correlations/edges into the predicted graph. How can this be fixed?
Throughout the paper, the authors assume that all the variables from which the causal structure is inferred are given. However, this is not the case in most, if not all, vision tasks. It is intuitive that badly-learned variables would, without doubt, lead to unsatisfactory performance. It seems that this kind of issue is neither discussed nor even mentioned in the paper.
In Figure 2e, why are there two different M^{1→2}? In I_1, if I understand correctly, it should indicate an intervention on x^1_3. If so, then why is there still an arrow entering it? So does I_2. Do their matrix representations in Figure 2e correspond to the causal graphs in Figure 2d?
Subscripts and superscripts are inconsistent throughout the paper. E.g., in Figure 2 I believe the subscripts indicate the time step, whilst the superscripts indicate the time step at some other places.
In Figure 3, I think there also exist some arrows from I^t to μ^t and Σ^t and to μ̃^t and Σ̃^t, because it is I^t that leads to φ and ψ. At the end of this caption, it should be "resulting in", instead of "resulting".
In Section 3.2, the authors introduce a new concept of agent. What does it mean? sources? states? It would be a bit confusing to introduce something new but without any explanation. Also, the authors do not consider the instantaneous effects in the paper. I am wondering how practical this assumption is in the real world applications. Additionally, the authors claim that "this feature makes our causal graph fully identifiable in the context of video sequence". What does "identifiable" mean here? In what sense is it fully identifiable? All these are not that straightforward and need more clarification.
What is the range of K of the intervention family? Why do you use K = 2, i.e., assume only one intervention family in this paper? What is the dimension of h^t and how is it determined?
All elements of M^t and I^t are assumed to be mutually independent, which will lead to a huge search space when the number of variables is large. How can this issue be addressed?
It seems that the regularization on the DAG does not guarantee the absence of instantaneous causal relations. Therefore, I am wondering how the objective (15) works in this regard.
The authors use the visual features extracted from input videos by the previous state-of-the-art method to improve model performance. If I understand correctly, each dimension of the extracted features will be seen as one variable, and the goal is to discover the causal structure over all these variables (i.e., dimensions), right? Then this goes back to my previous question: what if "bad features" are learned?
I would like to see more complex experiments on more realistic datasets. For example, it would be better to do some experiments on the simulated data with some distractors by replacing backgrounds with some natural videos [1]. The present experiments are too simple to verify the performance of the proposed approach.
Last but not least, I did not see any explicit assumptions on how nonstationary data change (e.g., which mechanism varies in time and which not, etc.). Without such assumptions over the underlying generative process, apparently it is generally impossible to predict the future observations. This is the most fundamental issue which has to be addressed or clarified in the paper.
References:
[1] Zhang et al. Learning Invariant Representations for Reinforcement Learning Without Reconstruction. 2021. |
1. What is the focus of the paper regarding causal discovery?
2. What are the strengths of the proposed approach, particularly in extending the existing framework?
3. What are the weaknesses of the method, especially in its assumptions and limitations?
4. Do you have any concerns about the effectiveness of the framework in learning causal structures?
5. How does the reviewer assess the performance of the method in the downstream tasks and its comparison with other works?
Summary Of The Paper
Causal discovery for video applications is an important problem. This paper focuses on discoverying non-stationary causal structures in real-world physical systems. It proposes the Intervention-based Recurrent Casual Model (IRCM) which extends the existing intervention-based casual discovery framework for videos. Then it uses a recurrent model to sequentially predict the causal structure model based on previous observations. The method are evaluated on two physical system simulation datasets.
Review
Strengths
The paper extends the previous intervention-based causal discovery framework to non-stationary video sequences (IRCM).
It applies recurrent networks to capture the long-term trajectory of causal graph models.
It outperforms the prior methods CoPhyNet and V-CDN on two downstream tasks, counterfactual reasoning and future forecasting, on two standard benchmark datasets.
Weaknesses
The key novel contribution seems to be the use of recurrent networks that enables the sampling of DAGs.
The framework assumes there are no confounders which could be very limiting for real-world applications.
The framework does not assume ground truth causal graphs are available. It is not obvious to me Eqn. 15 can effectively learn the causal structure.
The evaluation also does not provide direct evidence that the framework can discover true causal graphs.
ICLR | Title
Intervention-based Recurrent Casual Model for Non-stationary Video Causal Discovery
Abstract
Nonstationary causal structures are prevalent in real-world physical systems. For example, the stacked blocks interact until they fall apart, while the billiard balls move independently until they collide. However, most video causal discovery methods can not discover such nonstationary casual structures due to the lack of modeling for the instantaneous change and the dynamics of the causal structure. In this work, we propose the Intervention-based Recurrent Casual Model (IRCM) for nonstationary video casual discovery. First, we extend the existing intervention-based casual discovery framework for videos to formulate the instantaneous change of the causal structure in a principled manner. Then, we use a recurrent model to sequentially predict the causal structure model based on previous observations to capture the nonstationary dynamic of the causal structure. We evaluate our method on two popular physical system simulation datasets with various types of multi-body interactions. Experiments show that the proposed IRCM achieves the state-of-the-art performance on both the counterfactual reasoning and future forecasting tasks.
1 INTRODUCTION
Causal reasoning from visual input is essential for intelligence systems in understanding the complex mechanisms in the physical world. For instance, autonomous vehicles need to infer the unseen causal structures on the road that drives the state evolution of other agents across time to anticipate future events better accordingly. One main obstacle in discovering such causal structures is the dynamic nature of events. In Figure 1, we illustrate the varying casual relationship in a simple multi-body system where the stacked blocks fall to the ground. In nonstationary video sequences, the causal structure can have abrupt changes and/or long-term dependencies, posing challenges for casual graphical models (CGM).
For the first challenge, most CGMs in video causal understanding can not handle abrupt causal relationship changes. Li et al. (2020) (VCDN, Figure 2a) partially address this issue by learning a stationary causal summary graph, where causal structures are learned but fixed throughout the video. Zheng et al. (2018) (DYNOTEARS, Figure 2b) relaxed such fixed structure settings by assuming a
stationary order for the period bigger than 1. On the other hand, Brouillard et al. (2020) (DCDI, Figure 2c) recently proposes a differentiable causal model for a spatial graph to naturally capture the abrupt change of probability distributions during interventions. In this work, we naturally extend the intervention-based causal model to the graph with time-leg edges in videos, i.e., current objects’ states are fully determined by previous states (Figure 2d).
For the second challenge, most CGMs in video causal understanding purely depend on the object state observations. That is the causal graph at time t is conditionally independent from the causal graph at time t − 1 given the object states’ observations. Illustrated in Figure 1, CGMs that can be represented as graphs can be modeled as a trajectory in the nonstationary video. In this work, we adopt a recurrent network to sequentially predict CGM to model the trajectories.
Based on the intuitions above, we propose the Intervention-based Recurrent Casual Model (IRCM) to better capture the dynamics in nonstationary videos. As the ground truth CGMs are often not directly measurable, we adopt two popular downstream tasks to benchmark the efficacy of the proposed model: counterfactual reasoning and future state forecasting. Deducing the alternative results countering the reality over the discovered CGM can directly express the impacts of causality. Also, the causal knowledge endows better insights into which factors affect the target variable and how to manipulate the system properly.
We summarize the contribution of this work as follows:
• We introduce the IRCM model to extend the previous intervention-based causal discovery framework to nonstationary video sequences.
• We propose to use recurrent networks to capture the long-term trajectory of Causal Graph Models (CGM) and provide optimization solution to train recurrent networks together with downstream causal models.
• We achieve state-of-the-art performance on two downstream tasks: counterfactual reasoning and future forecasting on two standard benchmark datasets (CoPhy (Baradel et al., 2020), Fabric Manipulation (Brouillard et al., 2020)) by showing an averaged improvement of 11% across 9 metrics.
2 RELATED WORK
Causal Discovery of Stationary Models. Given the input time-series data, the goal is to uncover one fixed directed acyclic graph (DAG), where edges represent the direct causal relationships among variables. There are two main approaches: observation-based and intervention-based. The observation-based approach fully relies on the passive observation of the input system. Constraintbased methods rely on conditional independence tests as constraint-satisfaction to recover MarkovEquivalent Graphs (Spirtes et al., 2000; Entner & Hoyer, 2010; Colombo et al., 2011). Score-based methods assign a score to each DAG, and perform searching in this score space (Chickering, 2002; Zheng et al., 2018). The third class of methods exploits such asymmetries or causal footprints to uniquely identify a DAG (Shimizu, 2014; Zhang & Hyvärinen, 2009).
In practice, domain experts may design interventional experiments and collect additional data of the input system. The intervention-based approach aims to combine such interventional data with the observational data for a better identifiability of the causal structure (Eberhardt, 2012; Eberhardt et al., 2012). However, many of current approaches (Hyttinen et al., 2013; Ghassami et al., 2018b; Kocaoglu et al., 2017; Wang et al., 2017; Shanmugam et al., 2015; Peters et al., 2016; Rothenhäusler et al., 2015; Ke et al., 2019) either assume full knowledge of the intervention, make strong assumptions about the model class, or have scalability limitations. Recently, Brouillard et al. (2020) utilizes the continuous-constrained framework to model the interventions with neural network models. In contrast, our proposed method aims to uncover nonstationary causal structures.
Causal Discovery of Nonstationary Models. To extend to nonstationary data, recent works discover causal models in each sliding window separately, and then compare and merge them. Adams & MacKay (2007) explicitly detect the change points and divide the time series into stationary processes. To implicitly model the change of the causal model, Huang et al. (2015) assume certain smoothness properties and Zhang et al. (2017) use kernel distribution embeddings to describe shifting probabilistic distributions. Later, the problem was reformulated with the online parameter learning framework (Song et al., 2009; Xing et al., 2010). To tackle the varying instantaneous causal relations, both linear (Ghassami et al., 2018a; Huang et al., 2019; Huang & Zhang, 2019; Huang et al., 2020a) and nonlinear (Huang et al., 2020b) causal models are proposed. Our proposed method treats the nonstationary changes of the system as interventions and re-purposes the intervention-based framework to discover time-varying causal graph structures.
Video Causal Discovery. The computer vision literature has accumulated several efforts to tackle the challenges of video modeling and prediction (Ye et al., 2019; Hsieh et al., 2018; Yi et al., 2020). One topic that has enjoyed recent success is reasoning about object dynamics in a video sequence. A line of research attempts to solve this task by modeling the correlations in a spatio-temporal context (Yi* et al., 2020; Chen et al., 2021; Bakhtin et al., 2019; Qi et al., 2021; Zhang et al., 2021). However, modeling the dependencies alone may not suffice to offer clear interpretations of object dynamics as we humans do. Addressing this issue, Baradel et al. (2020) and Li et al. (2020) introduce causal knowledge (Schölkopf et al., 2021; Bengio et al., 2020; Runge et al., 2019) to this task, but neither is able to fully uncover the causal structure underlying the video sequences: CoPhyNet (Baradel et al., 2020) derives an alternative output based on a known causal graph, while V-CDN (Li et al., 2020) focuses on recovering stationary causal structures from the video. Instead, our proposed method applies the new intervention-based method to capture nonstationary causal structures.
3 METHODOLOGY
In this section, we present the Intervention-based Recurrent Causal Model (IRCM) for nonstationary video causal discovery. We first give an overview of the model architecture, as shown in Figure 3, then dive into the two components of IRCM: the Recurrent Network and the Intervention-based Causal Model.
[Figure 3: Model overview with two components, the Recurrent Network and the Intervention-based Causal Model, connected by a sampling step.]
3.1 PROBLEM FORMULATION
We factorize the joint probability of a temporal sequence into a sequential form:
$$p(\mathbf{x}^{1:T}; \theta) = p(\mathbf{x}^1; \theta) \prod_{t=2}^{T} p(\mathbf{x}^t \mid \mathbf{x}^{1:t-1}; \theta), \qquad (1)$$
where $\theta$ denotes the model parameters to learn. This formulation makes it easy to do future forecasting by conditioning any unknown $\mathbf{x}^t$ on the observed or previously predicted history $\mathbf{x}^{1:t-1}$. For simplicity, we decode multiple frames in an autoregressive way: at each timestep, we predict $\hat{\mathbf{x}}^t$ as the mode of $p(\mathbf{x}^t \mid \mathbf{x}^{1:t-1}; \theta)$ and condition further predictions on it. Furthermore, we decompose the density function into a Recurrent Network (RN) and an Intervention-based Causal Model (ICM) by:
$$f_\theta(\mathbf{x}^t \mid \mathbf{x}^{1:t-1}) = f_{\text{ICM}}(\mathbf{x}^t \mid M^t, I^t, \mathbf{x}^{1:t-1}; \theta_{\text{ICM}}) \qquad (2)$$
$$M^t, I^t \sim \text{Bern}(\alpha^t, \beta^t) \qquad (3)$$
$$\alpha^t, \beta^t = \text{RN}(\mathbf{x}^{1:t-1}; \theta_{\text{RN}}) \qquad (4)$$
In this way, we extend the continuous-constrained structure learning framework to sequential data. A minimal sketch of one forward step follows.
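To make the decomposition in Eqs. 2-4 concrete, the following minimal PyTorch sketch (our own illustration with hypothetical module names, not the authors' implementation) shows a single forward step: the RN maps the history to Bernoulli parameters, a graph and intervention set are sampled, and the ICM scores the next frame.

```python
import torch

def forward_step(rn, icm, history):
    """One RN -> sample -> ICM step (Eqs. 2-4); `rn` and `icm` are
    assumed callables standing in for the two model components."""
    alpha_t, beta_t = rn(history)          # Eq. 4: Bernoulli parameters
    M_t = torch.bernoulli(alpha_t)         # Eq. 3: sampled edge matrix
    I_t = torch.bernoulli(beta_t)          # Eq. 3: sampled intervention set
    log_density = icm(M_t, I_t, history)   # Eq. 2: log f_ICM(x^t | ...)
    return log_density, M_t, I_t
```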
3.2 MODEL DESIGNS
Intervention-based Causal Model. Formally, given the observed $d$ agents in the scene from time 1 to $T$, a joint probability distribution $f(\mathbf{x})$ depicts their states through time. In the context of a Causal Graph Model (CGM) (Pearl et al., 2016), $f(\mathbf{x})$ is defined by a directed acyclic graph (DAG) $G$ with $dT$ nodes, where node $x_j^t$ is associated with agent $j$ at time step $t$. Directed edges represent causal relationships. The distribution of agent states at time $t$ can be factorized as:
$$f(\mathbf{x}^t \mid \mathbf{x}^{1:t-1}; \theta) = \prod_{j=1}^{d} f(x_j^t \mid \text{Pa}(x_j^t); \theta), \qquad (5)$$
where $\text{Pa}(x_j^t)$ denotes the set of parent nodes of $x_j^t$ in $G$. Eq. 5 implicitly assumes causal sufficiency (Peters et al., 2017), i.e., our work does not involve any hidden confounding elements. Also, we consider neither instantaneous edges nor edges that go back in time in this work. Simply put, $\text{Pa}(x_j^t) \subseteq \{x_j^i\}_{i<t}$. This property makes our causal graph fully identifiable in the context of video sequences, as in Li et al. (2020).
Eq. 5 allows us to swap $f(x_j^t \mid \text{Pa}(x_j^t))$ with another conditional distribution; such swaps are called interventions. An intervention target set $I \subseteq V$ is a subset of graph nodes where interventions are exerted. We consider an intervention family $\mathcal{I} = \{I_k\}_{k=1}^{K}$. In particular, $I_1 = \emptyset$ denotes the observational distribution. We further use $I_k^t$ to denote the intervened nodes at time $t$ in the $k$-th intervention family. Given an interventional family $I_k$, we formalize the intervened distribution at time $t$ by:
$$f^{(k)}(\mathbf{x}^t) = \prod_{j \notin I_k^t} f^{(1)}(x_j^t \mid \text{Pa}(x_j^t)) \prod_{j \in I_k^t} f^{(k)}(x_j^t \mid \text{Pa}(x_j^t)). \qquad (6)$$
In our case, we set $K = 2$, assuming only one intervention family besides the observational one. Following Brouillard et al. (2020), we use neural networks (NN) to output the parameters of the density function $\tilde{f}$, e.g., a Gaussian:
$$f^{(1)} = \tilde{f}(\,\cdot\,;\, \text{NN}(\,\cdot\,;\, \phi_j^t)), \qquad f^{(2)} = \tilde{f}(\,\cdot\,;\, \text{NN}(\,\cdot\,;\, \psi_j^t)), \qquad (7)$$
where φ and ψ are parameters for the observational and interventional density function respectively. Thus, Eq. 6 can be written as:
$$f_{\text{ICM}}(\mathbf{x}^t \mid M^t, I^t, \mathbf{x}^{1:t-1}; \theta_{\text{ICM}}) = \prod_{j \notin I_2^t} \tilde{f}\big(x_j^t;\, \text{NN}(M_j^t \odot \mathbf{x};\, \phi_j^t)\big) \prod_{j \in I_2^t} \tilde{f}\big(x_j^t;\, \text{NN}(M_j^t \odot \mathbf{x};\, \psi_j^t)\big), \qquad (8)$$
where $M_j^t \in \{0, 1\}^{dT}$ is a binary vector indicating the parents of $x_j^t$ and $\odot$ is the Hadamard product. Specifically, two separate neural networks with identical architectures are used to predict the mean vectors and diagonal covariance matrices that parameterize the multivariate Gaussian distributions for our $\tilde{f}$,
$$\mu^t, \Sigma^t = \text{NN}(M^t \odot \mathbf{x};\, \phi^t), \qquad (9)$$
$$\tilde{\mu}^t, \tilde{\Sigma}^t = \text{NN}(M^t \odot \mathbf{x};\, \psi^t), \qquad (10)$$
for the observational and interventional distributions respectively. In summary, $\theta_{\text{ICM}} = \{\phi, \psi\}$. Causal Graph Sampling. Direct prediction of the graph structure in its binary form $M^t$ and $I^t$ is difficult and can lead to mode collapse. Following DCDI (Brouillard et al., 2020), we instead capture it through multivariate Bernoulli distributions.
Specifically, an upstream module, the Recurrent Network (RN), predicts a real matrix $\alpha^t$ and a real vector $\beta^t$ of the same shapes as $M^t$ and $I^t$. We then sample binary values as follows:
$$M^t \sim \text{Bern}(\alpha^t), \qquad (11)$$
$$I^t \sim \text{Bern}(\beta^t). \qquad (12)$$
All elements are mutually independent. The optimization difficulty incurred by the sampling process is addressed with the Straight-Through Gumbel estimator (Jang et al., 2016; Maddison et al., 2016), as sketched below.
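A minimal sketch of the Straight-Through estimator for Bernoulli variables is shown below (a binary Concrete relaxation; this is our own illustration of the cited estimator, operating on log-odds rather than on the probabilities $\alpha^t$ directly, and not the authors' code).

```python
import torch

def st_bernoulli(logits, tau=1.0):
    """Straight-Through Bernoulli sampling: hard {0, 1} samples in the
    forward pass, gradients through the relaxed sigmoid sample."""
    u = torch.rand_like(logits).clamp_(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log1p(-u)        # logistic noise
    soft = torch.sigmoid((logits + noise) / tau)  # relaxed sample
    hard = (soft > 0.5).float()
    return hard + soft - soft.detach()            # straight-through trick
```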
Recurrent Network. In the Recurrent Network (RN), we are concerned with modeling the distribution of the graph structure given previous observations $\mathbf{x}^{1:t-1}$. We consider all time-lagged, but not instantaneous, causal relations in this model. Thus at time step $t$, we need to predict the graph structure with respect to all previous $t-1$ frames. We group these graphs into $M^t \in \{0, 1\}^{d^2 \times (t-1)}$. For the intervention, it is a vector $I^t \in \{0, 1\}^d$.
$$\mathbf{h}^t = f_{\text{GRU}}(\mathbf{h}^{t-1}, \mathbf{x}^t; \theta_{\text{GRU}}) \qquad (13)$$
$$\alpha^t, \beta^t = f_{\text{MLP}}(\mathbf{h}^t; \theta_{\text{MLP}}) \qquad (14)$$
To model the nonstationary nature of real-world physical systems, we use a two-layer Gated Recurrent Unit (GRU) (Chung et al., 2014) to model temporal dependencies and an MLP to predict the likelihoods of existing causal relations $\alpha^t$ and successful interventions $\beta^t$. In summary, $\theta_{\text{RN}} = \{\theta_{\text{GRU}}, \theta_{\text{MLP}}\}$; a sketch of this design follows.
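The sketch below illustrates this design (layer sizes and the single $d \times d$ output slice are simplifying assumptions for brevity; the paper groups $t-1$ such matrices per step).

```python
import torch
import torch.nn as nn

class RecurrentNetwork(nn.Module):
    """Two-layer GRU with MLP heads for alpha^t (edges) and beta^t
    (interventions); a sketch, not the exact published architecture."""
    def __init__(self, d, feat_dim, hidden=128):
        super().__init__()
        self.d = d
        self.gru = nn.GRU(d * feat_dim, hidden, num_layers=2, batch_first=True)
        self.alpha_head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, d * d))
        self.beta_head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                       nn.Linear(hidden, d))

    def forward(self, history):            # history: (B, t-1, d*feat_dim)
        h, _ = self.gru(history)           # Eq. 13
        h_t = h[:, -1]                     # hidden state at step t
        alpha = torch.sigmoid(self.alpha_head(h_t)).view(-1, self.d, self.d)
        beta = torch.sigmoid(self.beta_head(h_t))        # Eq. 14
        return alpha, beta
```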
3.3 LEARNING AND INFERENCE
Learning. We do not have access to the ground-truth graph structure. This motivates us to follow DCDI (Brouillard et al., 2020), which serves as the pedestal of our work, and train IRCM as a continuous constrained optimization problem. The core of our objective is to maximize the regularized log-likelihood of Eq. 8 conditioned on the object states:
$$\mathcal{L} = \sum_k \mathbb{E}_{\mathbf{x} \sim P_{\mathbf{x}}} \log f(\mathbf{x}) - \zeta \sum_{(j,t)} \|M_j^t\|_0 - \eta \sum_t \|I^t\|_1 \quad \text{s.t.} \quad \text{Tr}(e^{\sigma(\alpha^t)}) - d = 0 \qquad (15)$$
$\zeta$ and $\eta$ are hyperparameters that control the sparsity of the causal graphs and intervention sets respectively. Because we consider neither instantaneous causal relations nor relations that go back in time, the learnt graph is guaranteed to be a DAG. Thus, IRCM naturally meets the acyclicity constraint $\text{Tr}(e^{\sigma(\alpha^t)}) - d = 0$ (Zheng et al., 2018). To estimate the gradients of $\alpha^t$ and $\beta^t$ with regard to $\mathcal{L}$, we follow DCDI (Brouillard et al., 2020) and utilize the Straight-Through Gumbel estimator (Jang et al., 2016; Maddison et al., 2016). This is equivalent to using discrete Bernoulli samples during the forward pass and Gumbel-Softmax samples during backpropagation. A sketch of the resulting per-step loss is given below.
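A per-step sketch of the resulting loss follows (sign flipped for a minimizer; `log_density` is the ICM output, and the sparsity terms use the relaxed samples so gradients flow; this is illustrative, not the authors' code).

```python
def ircm_step_loss(log_density, M_t, I_t, zeta=0.1, eta=0.1):
    """Negative regularized objective of Eq. 15 for one time step.
    M_t and I_t are (relaxed) Straight-Through samples, so summing
    them serves as a differentiable surrogate for the L0/L1 penalties."""
    graph_sparsity = zeta * M_t.sum()
    interv_sparsity = eta * I_t.sum()
    return -log_density + graph_sparsity + interv_sparsity
```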
Inference. At inference time, as shown in Figure 4, we use the observed and previously predicted sequence $\{\mathbf{x}^{1:t_0}, \hat{\mathbf{x}}^{t_0+1:t-1}\}$ to predict the multivariate distribution of $\mathbf{x}^t$ ($t_0$ is the length of the observed sequence). We then perform a secondary optimization to predict $\hat{\mathbf{x}}^t$:
$$\hat{\mathbf{x}}^t = \arg\max_{\mathbf{x}^t} f(\mathbf{x}^t \mid \mathbf{x}^{1:t-1}; \theta) \qquad (16)$$
$$= \sum_{(M^t, I^t)} \Big(\arg\max_{\mathbf{x}^t} f(\mathbf{x}^t \mid M^t, I^t; \theta_{\text{ICM}})\Big)\, p(M^t, I^t \mid \mathbf{x}^{1:t-1}; \theta_{\text{RN}})$$
$$\hat{x}_j^t = \sum_{(M^t, I^t)} (\mu_j^t)^{\delta(j \notin I^t)} (\tilde{\mu}_j^t)^{\delta(j \in I^t)}\, p(M^t, I^t \mid \mathbf{x}^{1:t-1}; \theta_{\text{RN}}), \qquad (17)$$
where $\delta(j \in I^t)$ is the indicator function denoting whether object $j$ is in the intervention set $I^t$. In practice, we take a Monte Carlo approach: we first sample $(M^t, I^t)$ according to the distribution and then average the predicted mean values from either the observational or the interventional distribution, as sketched below.
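A sketch of this Monte Carlo estimate follows (`mean_obs` and `mean_int` are hypothetical callables returning the means of Eqs. 9 and 10; our own illustration).

```python
import torch

def predict_next_state(rn, mean_obs, mean_int, history, n_samples=32):
    """Monte Carlo estimate of Eq. 17: sample (M^t, I^t), pick the
    observational or interventional mean per node, and average."""
    alpha, beta = rn(history)
    preds = []
    for _ in range(n_samples):
        M = torch.bernoulli(alpha)
        I = torch.bernoulli(beta)                 # (B, d) indicator
        mu = mean_obs(M, history)                 # (B, d, feat), Eq. 9
        mu_tilde = mean_int(M, history)           # (B, d, feat), Eq. 10
        mask = I.bool().unsqueeze(-1)             # intervened nodes
        preds.append(torch.where(mask, mu_tilde, mu))
    return torch.stack(preds).mean(dim=0)         # x_hat^t
```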
4 EXPERIMENT
4.1 EXPERIMENTAL SETUPS
Downstream tasks and Datasets. We conduct experiments to understand the efficacy of our proposed IRCM in terms of discovering the causal structure to estimate object dynamics across time. More specifically, counterfactual reasoning and future forecasting in video sequences are selected to demonstrate this point.
Task 1: Counterfactual Reasoning. This problem is formalized as follows (Baradel et al., 2020): during training, we first infer the causal structure from a set of visual observations. The objective is to reason about the counterfactual outcome given a modified initial object state. The Counterfactual Physics benchmark (CoPhy) (Baradel et al., 2020) contains two types of sequences, observational and counterfactual. The latter is built by changing the initial object state from the observations while leaving other factors (such as inertia, gravity, or friction) untouched. CoPhy comprises three physical scenarios in total: BlockTowerCF, BallsCF, and CollisionCF. Each scenario provides the 3D positions of all objects in the scene. BlockTowerCF also includes a binary label for stability.
Task 2: Future Forecasting. Future forecasting refers to predicting unknown future object states given the observed histories. We use the Fabric Manipulation (FM) (Li et al., 2020) dataset for the future forecasting task, where 2D coordinates of learned keypoints in the dynamic scene are provided.
Implementation Details. For both tasks, we use the same model architectures and the same settings for learning and inference. On each dataset, we directly use the extracted visual features from video frames in the previous state-of-the-art methods. Below are the details.
Visual Features. For the observation $\mathbf{x}^t$, we use visual features extracted from the input videos to improve model performance. For a fair comparison on CoPhy, we adopt the identical experimental protocols of (Baradel et al., 2020) to examine the generalizability of IRCM. We train and test with 4 objects on BlockTowerCF and BallsCF. The experiments on CollisionCF utilize all types of objects (spheres and cylinders) for both training and test. Moreover, following the settings of (Li et al., 2020), we first extract the 2D positions of key points from a pretrained DNN-based mechanism (Kulkarni et al., 2019) to represent the fabrics. Our experiments proceed by observing the first 5 time steps and predicting object states for the next 20 time steps during training, and forecasting the forthcoming 5 steps from the previous 5 steps at test time. We encode this location information with an MLP to obtain the object states for our model.
Model Architectures. We append two independent three-layer MLPs to a two-layer GRU to predict both $\alpha^t$ and $\beta^t$. At time instance $\tau$, $\alpha^\tau$ is reshaped into a set of $d \times d$ matrices for $M^t$. Notably, we zero-pad these matrices to ensure there are $t-1$ individual matrices in total per time instance for backpropagation. For faster learning convergence, we place an instance normalization layer before each ReLU activation in the MLP and use the sigmoid activation for the final output to make it a probability value.
Learning and Inference. In our experiments, the RMSProp optimizer (Goodfellow et al., 2016) is employed with the learning rate initialized at 8 × 10⁻⁵. Our implementation uses PyTorch. The experiments are executed on four Nvidia GeForce TITAN XPs with 48 GB of memory in total.
Evaluation Metrics. Since none of the aforementioned datasets provide annotations for the causal graphical model, we gauge model performance by the observed object dynamics, which are generated from the unobserved causal structure. Thus the ideal metrics should rely on object states, i.e., coordinates and stability. In particular, we aim to understand how closely the outcomes approximate the ground truth. To this end, we calculate the mean square error (MSE) and the negative log-likelihood (NLL) (Ivanovic & Pavone, 2019) on object coordinates between ground truth and prediction. NLL is the average negative log-likelihood between a ground-truth trajectory distribution determined by a kernel density estimate and the predicted trajectory. In addition, stability classification accuracy is used for our experiment on BlockTowerCF. Lower NLL and MSE and higher accuracy are preferred.
4.2 BENCHMARK RESULTS
As for comparison methods, we are primarily interested in assessing our IRCM against two leading studies on estimating agent states in a video sequence in the context of learning CGMs. More specifically, we select CoPhyNet (Baradel et al., 2020), which achieves cutting-edge results on the CoPhy benchmark, and the V-CDN framework (Li et al., 2020), which performs best on FM.
CoPhyNet handles object dynamics over time with a given causal structure and approaches object interactions with fully-connected graph convolutions (Kipf & Welling, 2016; Battaglia et al., 2018). V-CDN provides a model that infers a summary graph consisting of time-lagged causal relations, as shown in Figure 2. To the best of our knowledge, these two methods are the most relevant to ours.
We train our algorithm with the exact training objective of Eq. 15 on BallsCF, CollisionCF, and FM. For BlockTowerCF, we also include a stability classification term for a fair comparison:
$$\mathcal{L} = \sum_k \mathbb{E}_{\mathbf{x} \sim P_{\mathbf{x}}} \log f_k(\mathbf{x}) - \Big(\zeta \sum_{(j,t)} \|M_j^t\|_0 + \eta \sum_t \|I^t\|_1 + \text{CE}(\hat{S}^t, S^t)\Big), \qquad (18)$$
where the CE term is the cross entropy between the predicted and ground-truth stability. We forward the predicted locations and the learnt $M^t$ to a pre-trained GCN for stability estimation. Table 1 shows that our model consistently beats the baselines, which demonstrates the necessity of capturing nonstationary causal structures and of intervention-based causal discovery.
4.3 ABLATION STUDIES
The proposed IRCM has two main components: Intervention-based Causal Model and Recurrent Network. Below, we justify their design choices with the following ablation studies (Table 2).
Intervention-based Causal Model (ICM). The ICM model relies on the causal DAG structureM and the intervention set I . Below, we demonstrate their necessities by the ablation studies.
Importance of the Causal Graphical Model (M, I). IRCM w/o M, I treats the counterfactual reasoning task as future forecasting on both sequences by not transferring the learnt causal structure from observational to counterfactual sequences. We can see in Table 2 that this significantly hurts the performance of IRCM. In fact, IRCM w/o M, I shows the worst scores on both metrics. The comparison of these values against other methods overwhelmingly demonstrates the necessity and merit of taking the causal structure into account for video future forecasting.
Importance of Intervention (I). We justify the advantage of using the interventional distribution to discover the causal structure in a video sequence over IRCM w/o I, which directly approximates Eq. 5 from the observations. We observe a large performance gap between IRCM w/o I and IRCM, demonstrating the impact of interventions on learning the causal structure.
Importance of long-term M. IRCM-markov serves to verify the advantage of IRCM treating $M$ as a $d^2 \times (t-1)$ matrix. The scores of IRCM in Table 2 considerably exceed those of IRCM-markov. We attribute this to IRCM evidently offering a better capability to learn the causal relationships than setting t = 2. This advantage also conveys the message that agent states from several previous time instances can affect the current agent states. Additionally, the fact that the results favor IRCM over CoPhyNet (Baradel et al., 2020) can be attributed to a similar reason.
Recurrent Network (RN). Instead of the sequential modeling of the causal graphical structures with RN, we can predict a single structure or a sequence of structures that are temporally independent.
Importance of Nonstationary Modeling. IRCM-stationary assumes a time-invariant causal structure, thus sharing a similar idea with V-CDN (Li et al., 2020); i.e., we assume that the learned $(M^t, I^t)$ and the weights of the NN remain static. As shown in Table 2, IRCM significantly outperforms IRCM-stationary, fitting the time-varying structures in the video sequences better. This result emphasizes the importance of considering nonstationary structures in temporal modeling.
Importance of Sequential Modeling. We evaluate the advantage of extrapolating $M^t$ through our RN against IRCM-indep, which learns $M^t$ independently at each time step. Table 2 suggests that IRCM significantly outperforms IRCM-indep, demonstrating the advantage of the sequential modeling of causal structures.
5 CONCLUSION
In this paper, we propose an intervention-based recurrent causal model for video causal discovery. IRCM differs from prior works in the literature in that it introduces interventions to discover the causal structure underlying object dynamics in video sequences. At its core, we introduce a recurrent network to model the interventional distributions. This formulation allows us to capture the time-varying property that widely exists in video sequences. Experimental results show that IRCM delivers better performance on both counterfactual reasoning and future forecasting compared with prior works. One future direction is to relax the sufficiency assumption and incorporate confounding elements into our framework, enabling the discovery of causal relationships in real-world applications. | 1. What is the main contribution of the paper regarding causal discovery in dynamic environments?
2. What are the strengths and weaknesses of the proposed method, particularly in comparison to previous works like DCDI?
3. Do you have any questions or concerns regarding the experimental design and results?
4. Are there any clarification questions regarding the notation, equations, or methodology used in the paper?
5. Are there any minor issues or typos that can be improved in the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper targets the problem of causal discovery in dynamic environments. Their motivation is that previous methods do not consider the non-stationary property of the environment or do not use the intervention tool. In their method, they combine Recurrent Neural Networks and Intervention-based Causal Model (ICM) to build a model named Intervention-based Recurrent Casual Model (IRCM). Due to the difficulty of evaluating causal graphs, the authors use two popular downstream tasks to benchmark the efficacy of the proposed model: counterfactual reasoning and future state forecasting.
Review
General Strengths and weaknesses
+ The problem studied in this paper is definitely important in many real-world applications, such as robotics decision-making and autonomous driving. Discovering the underlying causation is important for agents to make reasonable decisions, especially in dynamic environments.
+ The method proposed in this paper is interesting and technically correct. Intuitively, using GRU to extract sequential information helps capture the changes of causal graphs.
- The main idea of causal discovery by sampling intervention set and causal graphs for masking is similar to DCDI [1]. This paper is more like using DCDI in dynamic environments, which may limit the novelty of this paper.
- The paper is not difficult to follow, but there are several places that may cause confusion (listed in point 3).
- The contribution of this paper is not fully supported by experiments.
Main Questions
(1) During the inference stage, why use samples instead of directly taking the argmax of Bernoulli distribution? How many samples are required? Will this sampling cause scalability problems?
(2) In the experiment part, the authors only compare with one method (V-CDN). Is it possible to compare DYNOTEARS with the proposed method?
(3) The authors mention that there is no ground truth to evaluate the causal discovery task. I agree with this opinion since the real world does not provide us causal graphs. However, the first experiment is conducted on a synthetic dataset, where I believe it is able to obtain the causation by checking collision conditions. In other words, I am not convinced only by the prediction results. Could the author provide the learned causal graphs and intervention sets and compare them with ground truth even on a simple synthetic dataset?
Clarification questions
(1) It seems the citation of NOTEARS [2] is wrongly used for DYNOTEARS [3]. This citation is important since DYNOTEARS is one of the motivations of this paper.
(2) The ICM part in Figure 3 is not clear. How is the intervention set $I$ used? If I understand correctly, the function $f$ is a prediction model conditioned on history frames.
(3) The term “Bern” in equation (3) is not defined. I assume it is the Bernoulli distribution. Then what does the symbol $\text{Bern}(\alpha^t, \beta^t)$ mean?
(4) According to equation (7), each node $j$ has its own parameters $\phi_j^t$ and $\psi_j^t$. Could the authors explain why the parameters are related to time?
(5) In equation (16), the authors mention the term “secondary optimization”. I can’t find any reference for it. Could the authors provide more information?
Minor things:
(1) In the caption of Figure 2, the authors say “For nonstationary causal models, (c)….”. But figure (c) belongs to stationary methods.
[1] Brouillard P, Lachapelle S, Lacoste A, et al. Differentiable causal discovery from interventional data[J]. arXiv preprint arXiv:2007.01754, 2020.
[2] Zheng X, Aragam B, Ravikumar P, et al. Dags with no tears: Continuous optimization for structure learning[J]. arXiv preprint arXiv:1803.01422, 2018.
[3] Pamfil R, Sriwattanaworachai N, Desai S, et al. Dynotears: Structure learning from time-series data[C]//International Conference on Artificial Intelligence and Statistics. PMLR, 2020: 1595-1605. |
ICLR | Title
Robust Models Are More Interpretable Because Attributions Look Normal
Abstract
Recent work has found that adversarially-robust deep networks used for image classification are more interpretable: their feature attributions tend to be sharper, and are more concentrated on the objects associated with the image’s groundtruth class. We show that smooth decision boundaries play an important role in this enhanced interpretability, as the model’s input gradients around data points will more closely align with boundaries’ normal vectors when they are smooth. Thus, because robust models have smoother boundaries, the results of gradientbased attribution methods, like Integrated Gradients and DeepLift, will capture more accurate information about nearby decision boundaries. This understanding of robust interpretability leads to our second contribution: boundary attributions, which aggregate information about the normal vectors of local decision boundaries to explain a classification outcome. We show that by leveraging the key factors underpinning robust interpretability, boundary attributions produce sharper, more concentrated visual explanations—even on non-robust models.
1 INTRODUCTION
Feature attribution methods are widely used to explain the predictions of neural networks (Binder et al., 2016; Dhamdhere et al., 2019; Fong & Vedaldi, 2017; Leino et al., 2018; Montavon et al., 2015; Selvaraju et al., 2017; Shrikumar et al., 2017; Simonyan et al., 2013; Smilkov et al., 2017; Springenberg et al., 2014; Sundararajan et al., 2017). By assigning an importance score to each input feature of the model, these techniques help to focus attention on parts of the data most responsible for the model’s observed behavior. Recent work (Croce et al., 2019; Etmann et al., 2019) has observed that feature attributions in adversarially-robust image models, when visualized, tend to be more interpretable—the attributions correspond more clearly to the discriminative portions of the input.
One way to explain the observation relies on the fact that robust models do not make use of nonrobust features (Ilyas et al., 2019) whose statistical meaning can change with small, imperceptible changes in the source data. Thus, by using only robust features to predict, these models naturally tend to line up with visibly-relevant portions of the image. Etmann et al. take a different approach, showing that the gradients of robust models’ outputs more closely align with their inputs, which explains why attributions on image models are more visually interpretable.
In this paper, we build on this geometric understanding of robust interpretability. With both analytical (Sec. 3) and empirical (Sec. 5) results, we show that the gradient of the model with respect to its input, which is the basic building block of all gradient-based attribution methods, tends to be more closely aligned with the normal vector of a nearby decision boundary in robust models than in “normal” models. Leveraging this understanding, we propose Boundary-based Saliency Map (BSM) and Boundary-based Integrated Gradient (BIG), two variants of boundary attributions (Sec. 4), which base attributions on information about nearby decision boundaries (see an illustration in Fig. 1a). While BSM provides theoretical guarantees in the closed-form, BIG generates both quantitatively and qualitatively better explanations. We show that these methods satisfy several desireable formal properties, and that even on non-robust models, the resulting attributions are more focused (Fig. 1b) and less sensitive to the “baseline” parameters required by some attribution methods.
To summarize, our main contributions are as follows. (1) We present an analysis that sheds light on the previously-observed phenomeon of robust interpretability showing that alignment between the normal vectors of decision boundaries and models’ gradients is a key ingredient (Proposition 1).
In particular, we derive a closed-form result for one-layer networks (Theorem 1) and empirically validate the take-away of our theorem generalizes to deeper networks. (2) Motivated by our analysis, we introduce boundary attributions, which leverage the connection between boundary normal vectors and gradients to yield explanations for non-robust models that carry over many of the favorable properties that have been observed of explanations on robust models. (3) We empirically demonstrate that one such type of boundary attribution, called Boundary-based Integrated Gradients (BIG), produces explanations that are more accurate than prior attribution methods (relative to ground-truth bounding box information), while mitigating the problem of baseline sensitivity that is known to impact applications of Integrated Gradients Sundararajan et al. (2017) (Section 6).
2 BACKGROUND
We begin by introducing our notation. Throughout the paper we use italicized symbols $x$ to denote scalar quantities and bold-face $\mathbf{x}$ to denote vectors. We consider neural networks with ReLU activations prior to the top layer, and a softmax activation at the top. The predicted label for a given input $x$ is given by $F(x) = \arg\max_c f_c(x)$, $x \in \mathbb{R}^d$, where $F(x)$ is the predicted label and $f_i(x)$ is the output for class $i$. As the softmax layer does not change the ranking of neurons in the top layer, we will assume that $f_i(x)$ denotes the pre-softmax score. Unless otherwise noted, we use $\|x\|$ to denote the $\ell_2$ norm of $x$, and write the $\ell_2$ neighborhood centered at $x$ with radius $\epsilon$ as $B(x, \epsilon)$.
Explainability. Feature attribution methods are widely-used to explain the predictions made by DNNs, by assigning importance scores for the network’s output to each input feature. Conventionally, scores with greater magnitude indicate that the corresponding feature was more relevant to the predicted outcome. We denote feature attributions by z = g(x, f), z,x ∈ Rd. When f is clear from the context, we simply write g(x). While there is an extensive and growing literature on attribution methods, our analysis will focus closely on the popular gradient-based methods, Saliency Map (Simonyan et al., 2013), Integrated Gradient (Sundararajan et al., 2017) and Smooth Gradient (Smilkov et al., 2017), shown in Defs 1-3.
Definition 1 (Saliency Map (SM)) The Saliency Map $g^S(x)$ is given by $g^S(x) := \partial f(x) / \partial x$.
Definition 2 (Integrated Gradient (IG)) Given a baseline input $x_b$, the Integrated Gradient $g^{IG}(x; x_b)$ is given by $g^{IG}(x; x_b) := (x - x_b) \int_0^1 \frac{\partial f((x - x_b)t + x_b)}{\partial x} dt$.
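For reference, a standard Riemann-sum approximation of Def. 2 can be sketched as follows (our own illustrative PyTorch code, not the authors' implementation; `f` is assumed to return the scalar class score).

```python
import torch

def integrated_gradients(f, x, x_baseline, steps=50):
    """Riemann-sum approximation of IG (Def. 2)."""
    grads = []
    for a in torch.linspace(0.0, 1.0, steps):
        xi = (x_baseline + a * (x - x_baseline)).detach().requires_grad_(True)
        f(xi).backward()                       # d f / d xi
        grads.append(xi.grad.detach())
    avg_grad = torch.stack(grads).mean(dim=0)  # approximates the integral
    return (x - x_baseline) * avg_grad
```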
Definition 3 (Smooth Gradient (SG)) Given a zero-centered Gaussian distribution $\mathcal{N}$ with a standard deviation $\sigma$, the Smooth Gradient $g^{SG}(x; \sigma)$ is given by $g^{SG}(x; \sigma) := \mathbb{E}_{\epsilon \sim \mathcal{N}(0, \sigma^2 I)} \frac{\partial f(x + \epsilon)}{\partial x}$.
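Def. 3 is typically estimated by Monte Carlo sampling, as in the sketch below (illustrative code under the same conventions as the IG sketch above).

```python
import torch

def smooth_grad(f, x, sigma=0.1, n_samples=50):
    """Monte Carlo estimate of SG (Def. 3): average saliency maps over
    Gaussian perturbations of the input."""
    total = torch.zeros_like(x)
    for _ in range(n_samples):
        xi = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
        f(xi).backward()
        total += xi.grad.detach()
    return total / n_samples
```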
Besides, we will also include results from DeepLIFT (Shrikumar et al., 2017) and grad × input (element-wise multiplication between Saliency Map and the input) (Simonyan et al., 2013) in our empirical evaluation. As we show in Section 3.2, Defs 1-3 satisfy axioms that relate to the local linearity of ReLU networks, and in the case of randomized smoothing (Cohen et al., 2019), their robustness to input perturbations. We further discuss these methods relative to others in Sec. 7.
Robustness. Two concepts relating to adversarial robustness will be used in this paper: prediction robustness, meaning the model's output label remains unchanged within a particular $\ell_p$ norm ball, and attribution robustness, meaning the feature attributions are similar within the same ball. Recent work has identified the model's Lipschitz continuity as a bridge between these two concepts (Wang et al., 2020c), and some loss functions for achieving prediction robustness also bring attribution robustness (Chalasani et al., 2020). We refer to robustness as prediction robustness if not otherwise noted.
3 EXPLAINABILITY, DECISION BOUNDARIES, AND ROBUSTNESS
In this section, we begin by discussing the role of decision boundaries in constructing explanations of model behavior via feature attributions. We first illustrate the key relationships in the simpler case of linear models, which contain exactly one boundary, and then generalize to piecewise-linear classifiers as they are embodied by deep ReLU networks. We then show how local robustness causes attribution methods to align more closely with nearby decision boundaries, leading to explanations that better reflect these relationships.
3.1 ATTRIBUTIONS FOR LINEAR MODELS
Consider a binary classifier $C(x) = \text{sign}(w^\top x + b)$ that predicts a label in $\{-1, 1\}$ (ignoring "tie" cases where $C(x) = 0$, which can be broken arbitrarily). In its feature space, $C(x)$ is a hyperplane $H$ that separates the input space into two open half-spaces $S_1$ and $S_2$ (see Fig. 2a). Accordingly, the normal vector $\hat{n}$ of the decision boundary is the only vector that faithfully explains the model's classification, while other vectors, though they may describe directions that lead to positive changes in the model's output score, are not faithful in this sense (see $v$ in Fig. 2a for an example). In practice, to assign attributions for predictions made by $C$, SM, SG, and the integral part of IG (see Sec. 2) return a vector characterized by $z = k_1 \hat{n} + k_2$ (Ancona et al., 2018), where $k_1 \neq 0$ and $k_2 \in \mathbb{R}$, regardless of the input $x$ being explained. In other words, these methods all measure the importance of features by characterizing the model's decision boundary, and are equivalent up to the scale and position of $\hat{n}$.
3.2 GENERALIZING TO PIECEWISE-LINEAR BOUNDARIES
In the case of a piecewise-linear model, such as a ReLU network, the decision boundaries comprise a collection of hyperplane segments that partition the feature space, as in H1, H2 and H3 in the example shown in Figure 2b. Because the boundary no longer has a single well-defined normal, one intuitive way to extend the relationship between boundaries and attributions developed in the previous section is to capture the normal vector of the closest decision boundary to the input being explained. However, as we show in this section, the methods that succeeded in the case of linear models (SM, SG, and the integral part of IG) may in fact fail to return such attributions in the more general case of piecewise-linear models, but local robustness often remedies this problem. We begin by reviewing key elements of the geometry of ReLU networks (Jordan et al., 2019).
ReLU activation polytopes. For a neuron $u$ in a ReLU network $f(x)$, we say that its status is ON if its pre-activation $u(x) \geq 0$, and OFF otherwise. We can associate an activation pattern denoting the status of each neuron with any point $x$ in the feature space, and a half-space $A_u$ with the activation constraint $u(x) \geq 0$. Thus, for any point $x$ the intersection of the half-spaces corresponding to its activation pattern defines a polytope $P$ (see Fig. 2b), and within $P$ the network is a linear function such that $\forall x \in P, f(x) = w_P^\top x + b_P$, where the parameters $w_P$ and $b_P$ can be computed by differentiation (Fromherz et al., 2021). Each facet of $P$ (dashed lines in Fig. 2b) corresponds to a boundary that "flips" the status of its corresponding neuron. Like activation constraints, decision boundaries are piecewise-linear, because each decision boundary corresponds to a constraint $f_i(x) \geq f_j(x)$ for two classes $i, j$ (Fromherz et al., 2021; Jordan et al., 2019). Gradients might fail. Saliency maps, which we take to be simply the gradient of the model with respect to its input, can thus be seen as a way to project an input onto a decision boundary. That is, a saliency map is a vector normal to a nearby decision boundary segment. However, as others have noted, a saliency map is not always normal to any real boundary segment in the model's geometry (see the left plot of Fig. 2c): when the closest boundary segment is not within the activation polytope containing $x$, the saliency map will instead be normal to the linear extension of some other boundary segment (Fromherz et al., 2021). In fact, the observation that iterative gradient descent typically outperforms the Fast Gradient Sign Method (Goodfellow et al., 2015) as an attack demonstrates that this is often the case.
When gradients succeed. While saliency maps may not be the best approach in general for capturing information about nearby segments of the model’s decision boundary, there are cases in which it serves as a good approximation. Recent work has proposed using the Lipschitz continuity of an attribution method to characterize the difference between the attributions of an input x and its neighbors within a `p ball neighborhood (Def. 4) (Wang et al., 2020c). This naturally leads to Proposition 1, which states that the difference between the saliency map at an input and the correct normal to the closest boundary segment is bounded by the distance to that segment.
Definition 4 (Attribution Robustness) An attribution method g(x) is (λ, δ)-locally robust at the evaluated point x if ∀x′ ∈ B(x, δ), ||g(x′)− g(x)|| ≤ λ||x′ − x||.
Proposition 1 Suppose that f has a (λ, δ)-robust saliency map gS at x, x′ is the closest point on the closest decision boundary segment to x and ||x′ − x|| ≤ δ, and that n is the normal vector of that boundary segment. Then ||n− gS(x)|| ≤ λ||x− x′||.
Proposition 1 therefore provides the following insight: for networks that admit robust attributions (Chen et al., 2019; Wang et al., 2020c), the saliency map is a good approximation to the boundary vector. As prior work has demonstrated the close correspondence between robust prediction and robust attributions (Wang et al., 2020c; Chalasani et al., 2020), this in turn suggests that explanations on robust models will more closely resemble boundary normals.
As training robust models can be expensive and may not come with guarantees of robustness, post-processing techniques like randomized smoothing (Cohen et al., 2019) have been proposed as an alternative. Dombrowski et al. (2019) noted that models with softplus activations ($y = 1/\beta \log(1 + \exp(\beta x))$) approximate smoothing, and in fact give an exact correspondence for single-layer networks. Combining these insights, we arrive at Theorem 1, which suggests that the saliency map on a smoothed model approximates the normal vector of the closest boundary well; the similarity is inversely proportional to the standard deviation of the noise used to smooth the model.
Theorem 1 Let $m(x) = \text{ReLU}(Wx)$ be a one-layer network, and write $m_\sigma(x)$ for its randomized-smoothing counterpart. Let $g(x)$ be the SM for $m_\sigma(x)$ and suppose $\forall x'' \in B(x, \|x - x'\|), \|g(x'')\| \geq c$, where $x'$ is the closest adversarial example. Then $\|g(x) - g(x')\| \leq \lambda$, where $\lambda$ is monotonically decreasing w.r.t. $\sigma$.
Theorem 1 suggests that when randomized smoothing is used, the normal vector of the closest decision boundary segment and the saliency map are similar, and this similarity increases with the smoothness of the model's boundaries. We believe an analytical form exists for deeper networks, but its expression might be unnecessarily complex because we would need to apply ReLU recursively before computing the integral (i.e., the expectation). Taken together, the analytical result for one-layer networks and the empirical validation for deeper nets in Figure 11 show that attributions and boundary-based attributions are more similar in a smoothed model.
4 BOUNDARY-BASED ATTRIBUTION
Without the properties introduced by robust learning or randomized smoothing, the local gradient, i.e. saliency map, may not be a good approximation of decision boundaries. In this section, we build on the insights of our analysis to present a set of novel attribution methods that explicitly incorporate the normal vectors of nearby boundary segments. Importantly, these attribution methods can be applied to models that are not necessarily robust, to derive explanations that capture many of the beneficial properties of explanations for robust models.
Using the normal vector of the closest decision boundary to explain a classifier naturally leads to Definition 5, which defines attributions directly from the normal of the closest decision boundary.
Definition 5 (Boundary-based Saliency Map (BSM)) Given $f$ and an input $x$, we define the Boundary-based Saliency Map $BS(x)$ as follows: $BS(x) := \partial f_c(x') / \partial x'$, where $x'$ is the closest adversarial example to $x$, i.e., $c = F(x) \neq F(x')$ and $\forall x_m. \|x_m - x\| < \|x' - x\| \rightarrow F(x) = F(x_m)$.
Incorporating More Boundaries. The main limitation of using Definition 5 as a local explanation is obvious: the closest decision boundary only captures one segment of the entire decision surface. Even in a small network, there will be numerous boundary segments in the vicinity of a relevant point. Taking inspiration from Integrated Gradients, Definition 6 proposes the Boundary-based Integrated Gradient (BIG) by aggregating the attributions along a line between the input and its closest boundary segment.
Definition 6 (Boundary-based Integrated Gradient (BIG)) Given $f$, Integrated Gradient $g^{IG}$, and an input $x$, we define the Boundary-based Integrated Gradient $BIG(x)$ as follows: $BIG(x) := g^{IG}(x; x')$, where $x'$ is the nearest adversarial example to $x$, i.e., $c = F(x) \neq F(x')$ and $\forall x_m. \|x_m - x\| < \|x' - x\| \rightarrow F(x) = F(x_m)$.
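Operationally, BIG is IG with the baseline replaced by an approximate closest boundary point, as in this sketch (reusing the hypothetical `integrated_gradients` helper from the Sec. 2 sketch; `find_boundary_point` stands in for the search described below).

```python
def big_attribution(f, x, find_boundary_point, steps=50):
    """BIG (Def. 6): run IG from the (approximate) closest boundary
    point x' instead of a fixed baseline such as a black image."""
    x_boundary = find_boundary_point(f, x)     # approximate closest x'
    return integrated_gradients(f, x, x_boundary, steps=steps)
```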
Geometric View of BIG. BIG explores a linear path from the boundary point to the target point. Because points on this path are likely to traverse different activation polytopes, the gradients of the intermediate points used to compute $g^{IG}$ are normals of the linear extensions of their local boundaries. As the input gradient is identical within a polytope $P_i$, the aggregation computed by BIG sums each gradient $w_i$ along the path, weighted by the length of the path segment intersecting $P_i$. In other words, one may view IG as an exploration of the model's global geometry that aggregates all boundaries from a fixed reference point, whereas BIG explores the local geometry around $x$. In the former case, the global exploration may reflect boundaries that are not particularly relevant to the model's observed behavior at a point, whereas the locality of BIG aggregates boundaries that are more closely related (a visualization is shown in Fig. 1a).
Finding nearby boundaries. Finding the exact closest boundary segment is identical to the problem of certifying local robustness (Fromherz et al., 2021; Jordan et al., 2019; Kolter & Wong, 2018; Lee et al., 2020; Leino et al., 2021b; Tjeng et al., 2019; Weng et al., 2018), which is NP-hard for piecewise-linear models (Sinha et al., 2020). To efficiently find an approximation of the closest boundary segment, we leverage an ensemble of techniques for generating adversarial examples, i.e., PGD (Madry et al., 2018), AutoPGD (Croce & Hein, 2020) and CW (Carlini & Wagner, 2017), and use the closest one found given a time budget. The details of our implementation are discussed in Section 5, where we show that this yields good results in practice; a sketch of the ensemble search follows.
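The sketch below illustrates the ensemble search (`model` returns the logit vector, and each element of `attacks` is a hypothetical callable returning a candidate adversarial example or None; our own illustration, not the paper's implementation).

```python
import torch

def closest_boundary_point(model, x, attacks):
    """Approximate the closest boundary point by keeping the successful
    adversarial example (from PGD/CW/AutoPGD-style attacks) nearest to
    x in the l2 norm."""
    label = model(x).argmax()
    best, best_dist = None, float("inf")
    for attack in attacks:
        x_adv = attack(model, x)
        if x_adv is None or model(x_adv).argmax() == label:
            continue                           # attack failed
        dist = torch.norm(x_adv - x)
        if dist < best_dist:
            best, best_dist = x_adv, dist
    return best
```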
5 EVALUATION
In this section, we first validate that attribution vectors are more aligned with the normal vectors of nearby boundaries in robust models (Fig. 3a). Second, we show that boundary-based attributions provide more "accurate" explanations, i.e., attributions that highlight features actually relevant to the label, both visually (Fig. 4 and 5) and quantitatively (Table 1). Finally, we show that in a standard model, whenever attributions align more closely with the boundary attributions, they are more "accurate".
General Setup. We conduct experiments over two data distributions, ImageNet (Russakovsky et al., 2015) and CIFAR-10 (Krizhevsky et al.). For ImageNet, we choose 1500 correctly-classified images from ImageNette (Howard), a subset of ImageNet, with bounding box area less than 80% of the original source image. For CIFAR-10, We use 5000 correctly-classified images. All standard and robust deep classifiers are ResNet50. All weights are pretrained and publicly available (Engstrom et al., 2019). Implementation details of the boundary search (by ensembling the results of PGD, CW and AutoPGD) and the hyperparameters used in our experiments, are included in Appendix B.2.
5.1 ROBUSTNESS → BOUNDARY ALIGNMENT
In this subsection, we show that SM and IG align better with the normal vectors of the decision boundaries in robust models. For SM, we use BSM as the normal vectors of the nearest decision boundaries and measure the alignment by the $\ell_2$ distance between SM and BSM, following Proposition 1. For IG, we use BIG as the aggregated normal vectors of all nearby boundaries because
IG also incorporates more boundary vectors. Recently, Pan et al. (2021) also proposed Adversarial Gradient Integral (AGI) as an alternative way of incorporating boundary normal vectors into IG. We first use both BIG and AGI to measure how well IG aligns with boundary normals, and later compare them in Sec. 5.2, followed by a formal discussion in Sec. 7.
Aggregated results for standard and robust models are shown in Fig. 3a. It shows that adversarial training with a bigger $\epsilon$ encourages a smaller difference between attributions and their boundary variants. In particular, using the $\ell_2$ norm and setting $\epsilon = 3.0$ is most effective for ImageNet compared to an $\ell_\infty$ norm bound. One possible explanation is that the $\ell_2$ space is special because training with an $\ell_\infty$ bound may encourage the gradient to be more Lipschitz in $\ell_1$, owing to the duality between Lipschitzness and the gradient norm, whereas $\ell_2$ is its own dual.
5.2 BOUNDARY ATTRIBUTION → BETTER LOCALIZATION
In this subsection, we show boundary attributions (BSM, BIG and AGI) better localize relevant features. Besides SM, IG and SG, we also focus on other baseline methods including Grad × Input (GTI) (Simonyan et al., 2013) and DeepLIFT (rescale rule only) (Shrikumar et al., 2017) that are reported to be more faithful than other related methods (Adebayo et al., 2018; 2020).
In an image classification task where ground-truth bounding boxes are given, we consider features within a bounding box as more relevant to the label assigned to the image. Our evaluation is performed over ImageNet only, because no bounding boxes are provided for CIFAR-10. The metrics used for our evaluation are: 1) Localization (Loc.) (Chattopadhyay et al., 2017), which evaluates the intersection of areas with positive attributions and the bounding box; 2) Energy Game (EG) (Wang et al., 2020a), which instead computes the portion of attribution scores within the bounding box. While these two metrics are common in the literature, we propose the following additional metrics: 3) Positive Percentage (PP), which evaluates the portion of positive attributions in the bounding box, based on the naive assumption that all features within bounding boxes are relevant to the label (we revisit this assumption in Sec. 6); and 4) Concentration (Con.), which sums the absolute attribution scores over the distance between the "mass" center of the attributions and each pixel within the bounding box. Higher Loc., EG, PP, and Con. indicate better results. We provide formal details for these metrics in Appendix B.1.
We show the average scores for ResNet50 models in Table 1; the corresponding boxplots can be found in Appendix B.4. BIG is noticeably better than other methods on the Loc., EG, PP and Con. scores for both robust and standard models, and matches the performance of SG on EG for a standard model. Notice that BSM is not significantly better than the others in a standard model, which confirms the motivation for BIG: we need to incorporate more nearby boundaries because a single boundary may not be sufficient to capture the relevant features.
We also measure the correlation between the alignment of SM and BSM with boundary normals and the localization abilities, respectively. For SM, we use BSM to represent the normal vectors of the boundary. For IG, we use AGI and BIG. For each pair X-Y in {SM-BSM, IG-AGI, IG-BIG}, we measure the empirical correlation coefficient between −||X− Y ||2 and the localization scores of X in a standard ResNet50 and the result is shown in Fig. 3b. Our results suggest that when the attribution methods better align with their boundary variants, they can better localize the relevant features in terms of the Loc. and EG. However, PP and Con. have weak and even negative correlations. One possible explanation is that the high PP and Con. of BIG and AGI compared to IG (as shown in Table 1) may also come from the choice of the reference points. Namely, compared to a zero vector, a reference point on the decision boundary may better filter out noisy features.
We end our evaluations by visually comparing the proposed method, BIG, against all other attribution methods for the standard ResNet50 in Fig. 4 and for the robust ResNet50 in Fig. 5, which demonstrates that BIG can easily and efficiently localize features that are relevant to the prediction. More visualizations can be found in Appendix E.
Summary. Taken together, we close the loop and empirically show that standard attributions in robust models are visually more interpretable because they better capture the nearby decision boundaries. The final take-away from our analytical and empirical results is that if more resources are devoted to training robust models, effectively identical explanations can be obtained using much less costly standard gradient-based methods, i.e., IG.
6 DISCUSSION
Baseline Sensitivity. It is natural to expect that BIG frees users from baseline selection when explaining non-linear classifiers. Empirical evidence has shown that IG is sensitive to baseline inputs (Sturmfels et al., 2020). We compare BIG with IG when using different baseline inputs, white or black images, with an example shown in Fig. 6b. For the first two images, when the baseline input is the opposite color of the dog, more pixels on the dog receive non-zero attribution scores, whereas the background always receives more attribution when the baseline input has the same color as the dog. This is because $g^{IG}(x)_i \propto (x - x_b)_i$ (see Def. 2), so greater differences between the input feature and the baseline feature lead to higher attribution scores. The third example further raises the question, for readers using different baselines in IG, of whether the network is using the white dog to predict Labrador retriever. We demonstrate that conflicts in IG caused by sensitivity to baseline selection can be resolved by BIG: BIG shows that the black dog in the last row is more important for predicting Labrador retriever, and this conclusion is further validated by our counterfactual experiment in Appendix D. Overall, the above discussion highlights that BIG is significantly better than IG at removing unnecessary sensitivity to baseline selection.
Limitations. We identify two limitations of this work. 1) Bounding boxes are not perfect ground-truth knowledge for attributions. In fact, we find many examples where the bounding boxes either fail to capture all relevant objects or are too big to capture only relevant features. Fixing mislabeled bounding boxes remains an open question and should benefit explainability research in general. 2) Our analysis only targets attributions that are based on end-to-end gradient computations. That is, we are not able to directly characterize the behavior of perturbation-based approaches, e.g., Mask (Fong & Vedaldi, 2017), or activation-based approaches, e.g., GradCAM (Selvaraju et al., 2017) and Feature Visualization (Olah et al., 2017).
7 RELATED WORK
Ilyas et al. (2019) shows an alternative way of explaining why robust models are more interpretable by showing robust models usually learn robust and relevant features, whereas our work serves as a geometrical explanation to the same empirical findings in using attributions to explain deep models. Our analysis suggests we need to capture decision boundaries in order to better explain classifiers,
whereas a similar line of work, AGI (Pan et al., 2021), which also involves computing adversarial examples, is motivated to find a path that is linear in the representation space instead of the input space, in contrast to IG. Therefore, AGI uses PGD to find the adversarial example and aggregates gradients along the non-linear path generated by the PGD search. We notice that the trajectory of the PGD search is usually extremely non-linear and complex, and does not guarantee returning closer adversarial examples without CW or AutoPGD (see comparisons between boundary search approaches in Table B.2). We understand that finding the exact closest decision boundary is not feasible, but our empirical results suggest that the linear path (BIG) returns visually sharper and quantitatively better results in localizing relevant features. Besides, a non-linear path causes AGI to fail the symmetry axiom (Sundararajan et al., 2017) (see Appendix C for an example of the importance of symmetry for attributions). We summarize the commonalities and differences in Table 6a.
In the evaluation of the proposed methods, we choose metrics related to bounding boxes over other metrics because, for classification, we are interested in whether the network associates relevant features with the label, while other metrics (Adebayo et al., 2018; Ancona et al., 2017; Samek et al., 2016; Wang et al., 2020b; Yeh et al., 2019), e.g., infidelity (Yeh et al., 2019), mainly evaluate whether output scores are faithfully attributed to each feature. Our idea of incorporating boundaries into explanations may generalize to other score attribution methods, e.g., Distributional Influence (Leino et al., 2018) and DeepLIFT (Shrikumar et al., 2017). The idea of using boundaries in explanations has also been explored by T-CAV (Kim et al., 2018), where a linear decision boundary is learned for the internal activations and associated with their proposed notion of concept.
When viewing our work as using nearby boundaries to explore the local geometry of the model's output surface, a related line of work is NeighborhoodSHAP (Ghalebikesabi et al., 2021), a local version of SHAP (Lundberg & Lee, 2017). When viewing our work as a different use of adversarial examples, other work focuses on counterfactual examples (semantically meaningful adversarial examples) on the data manifold (Chang et al., 2019; Dhurandhar et al., 2018; Goyal et al., 2019).
8 CONCLUSION
In summary, we rethink the question an explanation should answer for a classification task: what are the important features the classifier uses to place the input on a specific side of the decision boundary? We find that the answer relates to the normal vectors of decision boundaries in the neighborhood, and propose BSM and BIG as boundary attribution approaches. Empirical evaluations on state-of-the-art classifiers validate that our approaches provide more concentrated, sharper, and more accurate explanations than existing approaches. Our idea of leveraging boundaries to explain classifiers connects explanations with adversarial robustness and encourages the community to improve model quality for explanation quality.
A THEOREMS AND PROOFS
A.1 PROOF OF PROPOSITION 1
Proposition 1 Suppose that f has a $(\lambda, \delta)$-robust saliency map $g^S$ at $x$, that $x'$ is the closest point on the closest decision boundary segment to $x$ with $\|x' - x\| \leq \delta$, and that $n$ is the normal vector of that boundary segment. Then $\|n - g^S(x)\| \leq \lambda \|x - x'\|$.
The normal $n$ can be computed efficiently by taking the derivative of the model's output w.r.t. the point on the decision boundary, $n = \partial f(x') / \partial x'$, where $\forall x_m \in \mathbb{R}^d, F(x_m) = F(x)$ if $\|x_m - x\| \leq \|x' - x\|$. Because we assume $\|x - x'\| \leq \delta$ and the model has a $(\lambda, \delta)$-robust saliency map, by Def. 4 we have
$$\|n - g^S(x)\| \leq \lambda \|x - x'\|. \qquad \square$$
A.2 PROOF OF THEOREM 1
Theorem 1 Let $m(x) = \text{ReLU}(Wx)$ be a one-layer network, and write $m_\sigma(x)$ for its randomized-smoothing counterpart. Let $g(x)$ be the SM for $m_\sigma(x)$ and suppose $\forall x'' \in B(x, \|x - x'\|), \|g(x'')\| \geq c$, where $x'$ is the closest adversarial example. Then $\|g(x) - g(x')\| \leq \lambda$, where $\lambda$ is monotonically decreasing w.r.t. $\sigma$.
Proof:
We begin our proof by first introducing randomized smoothing.
Definition 7 (Randomized Smoothing (Cohen et al., 2019)) Suppose $F(x) = \arg\max_c f_c(x)$; the smoothed classifier $G(x)$ is defined as
$$G(x) := \arg\max_c \Pr[F(x + \epsilon) = c] \qquad (1)$$
where $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$.
Now the rest of the proof is three-fold: 1) first, we show that there exists a non-linear activation function Er(x) such that the output of the smoothed ReLU network $m_\sigma(x)$ is unchanged when the ReLU activation is replaced with the Er activation; 2) second, we derive the saliency map of the network with the Er activation; and 3) last, we show that the difference between the SM and BSM of the network with the Er activation is bounded, and that the bound is inversely proportional to the standard deviation used to create the smoothed ReLU network $m_\sigma(x)$.
(1) Step I: the Error (Er) activation function and randomized smoothing.¹
Randomized smoothing creates a smoothed model that returns the label the base classifier is most likely to return under Gaussian perturbation. Now we examine the output of each class under the Gaussian noise. Let $y_i$ be the output of the $i$-th class of the network $\text{ReLU}(Wx)$, that is
$$y_i = \mathbb{E}_{\epsilon \sim \mathcal{N}(0, \sigma^2 I)}\left[\text{ReLU}(w_i^\top(x + \epsilon))\right] \qquad (2)$$
To simplify the notation, we write $\mathbb{E}$ for $\mathbb{E}_{\epsilon \sim \mathcal{N}(0, \sigma^2 I)}$. We expand Equation (2):
$$y_i = \mathbb{E}\left[\text{ReLU}(w_i^\top x + w_i^\top \epsilon)\right] = \mathbb{E}\left[\text{ReLU}(u + \epsilon')\right] \qquad (3)$$
where we denote $u = w_i^\top x$ and $\epsilon' = w_i^\top \epsilon$. Here $u$ is a scalar and $\epsilon'$ follows a zero-centered univariate Gaussian with standard deviation $s \propto \sigma$, because the dot product between the constant weight vector $w_i$ and the random vector $\epsilon$ can be seen as a linear combination of the dimensions of $\epsilon$, and the covariance between the dimensions of $\epsilon$ is 0 for the Gaussian noise used for randomized smoothing
¹We appreciate the discussion with Pan Kessel, an author of Dombrowski et al. (2019), regarding the derivations from Equation (6) to (7).
in the literature (Cohen et al., 2019). By expanding the expectation symbol to its integral form, we obtain:
$$y_i = \frac{1}{s\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\left(-\frac{\epsilon'^2}{2s^2}\right) \text{ReLU}(u + \epsilon')\, d\epsilon' \qquad (4)$$
Let $\tau = u + \epsilon'$ and notice that $\text{ReLU}(\tau) = 0$ if $\tau < 0$; the equation above can be rewritten as:
$$y_i = \frac{1}{s\sqrt{2\pi}} \int_{0}^{\infty} \exp\left(-\frac{(\tau - u)^2}{2s^2}\right) \tau\, d\tau \qquad (5)$$
$$= \frac{s}{\sqrt{2\pi}} \exp\left(-\frac{u^2}{2s^2}\right) + \frac{u}{2}\left[1 + \text{Erf}\left(\frac{u}{\sqrt{2}s}\right)\right] \qquad (6)$$
where Erf is the error function defined as $\text{Erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x \exp(-t^2)\, dt$. We therefore define the Er activation for an input $u$ with standard deviation $s$ as
$$\text{Er}(u; s) = \frac{s}{\sqrt{2\pi}} \exp\left(-\frac{u^2}{2s^2}\right) + \frac{u}{2}\left[1 + \text{Erf}\left(\frac{u}{\sqrt{2}s}\right)\right] \qquad (8)$$
and we show that
yi = E_{ε∼N(0,σ²I)} [ReLU(wi⊤(x + ε))] = Er(wi⊤x; s(σ))  (9)
That is, to analyze the gradient of the output of a smoothed model w.r.t. the input, we can alternatively analyze the gradient of an equivalent Er network. We plot three examples of the Er activation in Fig. 7 so readers can see what the function looks like.
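As a quick numerical sanity check of Equations (8) and (9), one can compare the closed form against a Monte Carlo estimate of the smoothed ReLU. The snippet below is a minimal sketch of ours (the function name er is our own), not part of any released implementation:

import math
import numpy as np

def er(u, s):
    # closed-form smoothed ReLU from Equation (8): E[ReLU(u + eps)], eps ~ N(0, s^2)
    return s / math.sqrt(2 * math.pi) * math.exp(-u ** 2 / (2 * s ** 2)) \
        + u / 2 * (1 + math.erf(u / (math.sqrt(2) * s)))

rng = np.random.default_rng(0)
u, s = 0.7, 1.5
mc = np.maximum(u + s * rng.standard_normal(1_000_000), 0.0).mean()
print(er(u, s), mc)  # the two estimates should agree to about three decimals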
2) Step II: the Saliency Map for an Er network.
By the definition of Saliency map (Def. 1) and the chain rule, we have:
SM(x) = ∂yi/∂x = (∂yi/∂u)(∂u/∂x)   (let u = wi⊤x)  (10)
      = (∂/∂u) Er(u; s) · wi  (11)
      = (1/2) [1 + Erf(u/(√2 s))] · wi  (12)
The transition from Equation (11) to (12) uses the fact that the derivative of Erf(x) is (2/√π) exp(−x²).
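The derivative used in Equation (12) can also be checked numerically (again a sketch of ours, reusing er from the previous snippet):

import math

def d_er(u, s):
    # closed-form derivative from Equation (12): (1/2) [1 + Erf(u / (sqrt(2) s))]
    return 0.5 * (1 + math.erf(u / (math.sqrt(2) * s)))

u, s, h = 0.7, 1.5, 1e-6
finite_diff = (er(u + h, s) - er(u - h, s)) / (2 * h)
print(d_er(u, s), finite_diff)  # should agree to roughly six decimals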
3) Step III: the difference between SM and BSM for an Er network.
Let x′ be the closest point on the decision boundary for the smoothed classifier mσ and ||x − x′|| = r (for the closed-form expression of r, see Cohen et al. (2019)). Based on the definition of BSM, we have
BSM(x) = ∂yi(x′)/∂x′ = (1/2) [1 + Erf(u′/(√2 s))] · wi,   u′ = wi⊤x′  (13)
The difference between SM and BSM is therefore computed as
||BSM(x) − SM(x)|| = ||(1/2) [1 + Erf(u′/(√2 s))] · wi − (1/2) [1 + Erf(u/(√2 s))] · wi||  (14)
                   = (1/2) |Erf(u′/(√2 s)) − Erf(u/(√2 s))| · ||wi||  (15)
                   ≤ (1/2) [|Erf(u′/(√2 s))| + |Erf(u/(√2 s))|] · ||wi||   (Triangle Inequality)  (16)
We notice that u′ is bounded because u′ = wi⊤x′ ≤ ||wi|| · ||x′|| ≤ ||wi|| · (||x|| + r), and similarly for u: u = wi⊤x ≤ ||wi|| · ||x|| ≤ ||wi|| · (||x|| + r). Because the Erf function is increasing in its input and s > 0, we arrive at the following inequality:
||BSM(x) − SM(x)|| ≤ λ  (17)
where
λ = Erf(||wi|| · (||x|| + r)/(√2 s)) · ||wi||  (18)
We can drop the absolute values because the output of Erf is positive when its input is positive. Now, given that ||wi||, r and ||x|| are constants for a given input x, the upper bound Erf(||wi|| · (||x|| + r)/(√2 s)) · ||wi|| is monotonically increasing as s decreases. From Step I, we know that s ∝ σ; therefore, we have shown that there exists an upper bound λ on the difference between the SM and the BSM for a smoothed classifier, and that λ is monotonically decreasing w.r.t. the standard deviation of the Gaussian noise.
B EXPERIMENT DETAILS AND ADDITIONAL RESULTS
B.1 METRICS WITH BOUNDING BOXES
We will use the following extra notations in this section. Let X, Z and U be the set of indices of all pixels, the set of indices of pixels with positive attributions, and the set of indices of pixels inside the bounding box, respectively, for a target attribution map g(x). We denote the cardinality of a set S as |S|.
Localization (Loc.) (Chattopadhyay et al., 2017) evaluates the overlap between the bounding box and the pixels with positive attributions.
Definition 8 (Localization) For a given attribution map g(x), the localization score (Loc.) is defined as
Loc := |Z ∩ U| / (|U| + |Z ∩ (X \ U)|)  (19)
Energy Game (EG) (Wang et al., 2020a) instead computes the portion of attribution scores within the bounding box.
Definition 9 (Energy Game) For a given attribution map g(x), the energy game EG is defined as
EG := ∑_{i∈Z∩U} g(x)i / ∑_{i∈X} max(g(x)i, 0)  (20)
Positive Percentage (PP) evaluates the sum of positive attribution scores over the total (absolute value of the) attribution scores within the bounding box.
Definition 10 (Positive Percentage) Let V be the set of indices of all pixels with negative attribution scores. For a given attribution map g(x), the positive percentage PP is defined as
PP := ∑_{i∈Z∩U} g(x)i / (∑_{i∈Z∩U} g(x)i − ∑_{i∈V∩U} g(x)i)  (21)
Concentration (Con.) sums, over the bounding box, the normalized attribution scores weighted inversely by their distance to the "mass" center of the attributions. Notice that cx and cy can be computed with scipy.ndimage.center_of_mass. This definition rewards attribution maps in which pixels with high absolute attribution scores lie close to the mass center.
Definition 11 (Concentration) For a given attribution map g(x), the concentration Con. is defined as follows
Con. := ∑_{i∈U} ĝ(x)i / √((ix − cx)² + (iy − cy)²)  (22)
where ĝ is the normalized attribution map such that ĝi = gi / ∑_{i∈U} |gi|, ix, iy are the coordinates of pixel i, and
cx = ∑_{i∈U} ix ĝ(x)i / ∑_{i∈U} ĝ(x)i ,   cy = ∑_{i∈U} iy ĝ(x)i / ∑_{i∈U} ĝ(x)i  (23)
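For concreteness, all four metrics can be implemented in a few lines of NumPy. The sketch below is our own illustration under our own conventions (g is an H×W attribution map, box a boolean bounding-box mask; edge-case guards are omitted), not the evaluation code behind the reported numbers:

import numpy as np

def bbox_metrics(g, box):
    # g: (H, W) attribution map; box: (H, W) boolean mask of the bounding box (U)
    pos = g > 0                                                       # Z: positive attributions
    loc = (pos & box).sum() / (box.sum() + (pos & ~box).sum())        # Eq. (19)
    eg = g[pos & box].sum() / np.maximum(g, 0).sum()                  # Eq. (20)
    pp = g[pos & box].sum() / (g[pos & box].sum() - g[(g < 0) & box].sum())  # Eq. (21)
    # Concentration, Eqs. (22)-(23): attribution mass inside the box,
    # inversely weighted by the distance to the attribution "mass" center
    ys, xs = np.indices(g.shape)
    ghat = g / np.abs(g[box]).sum()
    cy = (ys[box] * ghat[box]).sum() / ghat[box].sum()
    cx = (xs[box] * ghat[box]).sum() / ghat[box].sum()
    dist = np.sqrt((ys[box] - cy) ** 2 + (xs[box] - cx) ** 2) + 1e-8
    con = (ghat[box] / dist).sum()
    return loc, eg, pp, con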
Besides metrics related to bounding boxes, there are other metrics in the literature used to evaluate attribution methods (Adebayo et al., 2018; Ancona et al., 2017; Samek et al., 2016; Wang et al., 2020b; Yeh et al., 2019). We focus on metrics that use provided bounding boxes, as we believe that they offer a clear distinction between likely relevant features and irrelevant ones.
B.2 IMPLEMENTING BOUNDARY SEARCH
Our boundary search uses a pipeline of PGDs, CW and AutoPGD. Adversarial examples returned by each method are compared and the closest one is returned. If no adversarial example is found, the pipeline returns the point from the last iteration of the first method (PGDs in our case). Hyper-parameters for each attack can be found in Table 2. The implementation of PGDs and CW is based on Foolbox (Rauber et al., 2020; 2017) and the implementation of AutoPGD is based on the authors' public repository2 (we only use the apgd-ce and apgd-dlr losses for efficiency reasons). All computations are done using a Titan RTX GPU accelerator with 24 GB of memory. Comparisons of the results of the ensemble of these three approaches are shown in Fig. 10a.
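A minimal sketch of the ensemble search follows (our own illustration built on the Foolbox 3 API; AutoPGD from the authors' repository is omitted here, and the hyper-parameters are placeholders rather than the ones from Table 2):

import torch
import foolbox as fb

def closest_boundary_point(model, x, y, eps=3.0):
    # return the closest adversarial example found by an ensemble of attacks
    fmodel = fb.PyTorchModel(model.eval(), bounds=(0, 1))
    attacks = [fb.attacks.L2ProjectedGradientDescentAttack(),
               fb.attacks.L2CarliniWagnerAttack()]
    best, best_dist = None, float("inf")
    for attack in attacks:
        _, x_adv, success = attack(fmodel, x, y, epsilons=eps)
        if bool(success.all()):
            dist = torch.norm((x_adv - x).flatten()).item()
            if dist < best_dist:
                best, best_dist = x_adv, dist
    return best  # None signals falling back to the last PGD iterate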
B.3 HYPER-PARAMETERS FOR ATTRIBUTION METHODS
All attributions are implemented with Captum (Kokhlikyan et al., 2020) and visualized with Trulens (Leino et al., 2021a). For BIG and IG, we use 20 intermediate points between the baseline and the input, and the interpolation method is set to riemann_trapezoid. For AGI, we build on the authors' public repository3. The choice of hyper-parameters follows the authors' defaults for ImageNet, and we make minimal changes to adapt them to CIFAR-10 (see Fig. 10b).
To visualize the attribution maps, we use the HeatmapVisualizer with blur=10, normalization_type="signed_max" and default values for the other keyword arguments from Trulens.
B.4 DETAILED RESULTS ON LOCALIZATION METRICS
We show the average scores for each localization metric in Sec. 5. We also show boxplots of the scores for each localization metric in Fig. 8 for the standard ResNet50 model and in Fig. 9 for the robust ResNet50 (`2, ε = 3.0). Higher scores are better results for all metrics.
2https://github.com/fra31/auto-attack
3https://github.com/pd90506/AGI
B.5 ADDITIONAL COMPARISON WITH AGI
We additionally compare the ability of BIG and AGI to localize relevant features when only PGDs are used to return the closest boundary points; that is, we recursively increase the norm bound and run the PGD attack until we first succeed in finding an adversarial point. We denote this approach as BIGp. Note that BIGp still differs from AGI in the type of path, i.e. lines versus curves, over which the integral is performed. That is, AGI aggregates path integrals starting from a set of adversarial points found by targeted PGD attacks, whereas BIGp starts from the adversarial point returned by an untargeted PGD attack. We use the same parameters for both PGD and AGI as in Fig. 2, and we run the experiments over the same dataset used in Sec. 5.1. For reference, we also include the results of IG. The results are shown in Table 3. We notice that after removing CW and AutoPGD, BIGp actually performs better than AGI, and even slightly better than BIG for the robust model. One reason for the small improvement from BIGp might be that, for a robust network, the gradient at each iteration of the PGD attack is more informative and less noisy than for a standard model, so the attack can better approximate the closest decision boundary. The results in Table 3 therefore demonstrate that BIG and BIGp are able to localize relevant features better than AGI. A sketch of the escalation loop behind BIGp follows.
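The following pseudocode-level sketch is our own; pgd_attack stands for any untargeted `2 PGD routine returning a candidate point and a success flag:

def bigp_boundary_point(model, x, y, eps0=0.5, factor=2.0, max_tries=6):
    # recursively enlarge the norm bound until untargeted PGD first succeeds;
    # the returned adversarial point becomes the path endpoint for BIGp
    eps = eps0
    for _ in range(max_tries):
        x_adv, success = pgd_attack(model, x, y, eps=eps)  # hypothetical helper
        if success:
            return x_adv
        eps *= factor
    return None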
B.6 ADDITIONAL LOCALIZATION METRIC
Besides the localization metrics used in Sec. 5.1, we discuss an additional localization metric frequently used for evaluating attention and CAM-based explanations: Top1-Loc (Choe & Shim, 2019; Aggarwal et al., 2020). Top1-Loc is calculated as follows: an instance is considered Top1-Loc correct given an attribution if 1) the prediction is Top1-correct; and 2) it is GT-Loc correct, namely, the IoU of the ground-truth bounding box and the area highlighted by the attribution is more than 50%. When only using images that are Top1-correct, Top1-Loc reduces to GT-Loc. Top1-Loc is different from the other localization metrics used for evaluating attribution methods because it takes the prediction behavior of the target model into account, which in general is not an axiom when motivating a gradient-based attribution method. As in the previous evaluations, we are only interested in images for which the model makes correct Top1 predictions; in this section we use the same images, which are true positives, so Top1-Loc accuracy reduces to GT-Loc accuracy and we measure GT-Loc directly. To determine which part of the image is highlighted by an attribution, we compute a threshold for each attribution map, and a pixel is considered within the highlighted region if and only if its attribution score is higher than the threshold. For a given attribution map, we take the threshold t to be the q-th percentile of the absolute values of the attribution scores (a sketch of this computation follows). We plot the GT-Loc accuracy against q in Fig. 13. We notice that attention-based and CAM-based attributions usually produce a cloud-like visualization because of the blurring technique or upsampling layers used to compute the results. To check how GT-Loc behaves for the gradient-based attributions of interest in this paper, we also include results (Fig. 14) where we first apply a Gaussian blur (σ = 3.0) to the attribution map before calculating the GT-Loc accuracy. The results are aggregated over 1500 images from ImageNette on a standard ResNet50 and a robust ResNet50, respectively. Higher GT-Loc scores are better.
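A sketch of the GT-Loc decision for a single instance (our own illustration; g is the attribution map, box the ground-truth bounding-box mask, q the percentile):

import numpy as np

def gt_loc_correct(g, box, q=40.0):
    # a pixel is highlighted iff its |attribution| exceeds the q-th percentile
    t = np.percentile(np.abs(g), q)
    highlight = np.abs(g) > t
    iou = (highlight & box).sum() / (highlight | box).sum()
    return iou > 0.5  # GT-Loc correct iff IoU with the bounding box exceeds 50%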
Behavior of BIG. The results in Fig. 13 and 14 show that BIG is better than all other attributions on standard models excluding SG, and uniformly better including SG on a robust model. Before we offer an explanation of the behavior of SG (green curves) on standard models in the next paragraph, we also observe that: 1) blurring only changes the GT-Loc scores, not the overall rankings across attributions; 2) a threshold corresponding to a percentile near 40% provides the best GT-Loc scores for all methods; and 3) gradient-based attributions generally yield worse GT-Loc (or Top1-Loc) scores than the CAM-based and attention-based approaches in the literature (Choe & Shim, 2019; Aggarwal et al., 2020), which is not surprising because gradient-based approaches are usually axiomatically justified to be faithful to the model. It is expected that the model will more or less learn spurious features from the input, which makes gradient-based attributions noisier than attention- and CAM-based ones. Therefore, when localizing relevant features, users may want to consult activation-based approaches, i.e. CAMs; but when debugging and ensuring the network learns fewer spurious and irrelevant features, users should instead use gradient-based approaches because of the axioms behind them.
Behavior of SG in Standard Models. SG is uniformly better than all other approaches in terms of GT-Loc accuracy on a standard model, which is surprising but not totally unexpected. We believe the reason behind this result is that SG is actually the gradient of a smoothed counterpart of the standard model (see the discussion near Theorem 1), which is more robust. Therefore, the comparison between SG and the other approaches is not apples-to-apples, because SG can be less faithful to the standard model; namely, SG is more faithful to the smoothed classifier. That is very likely why SG is worse than BIG in Fig. 13b and 14b, where the smoothing technique provides only a marginal robustness improvement for a model that has already been robustly trained.
B.7 SANITY CHECK FOR BIG
We perform sanity checks for BIG using rank order correlations between the absolute values of BIG attributions as we randomize the weights from the top layer to the bottom (Adebayo et al., 2018). To ensure the output of the model does not become NaN when randomizing the weights of each trainable layer, we replace each weight matrix with a random matrix of the same norm, as follows.
import torch

def _randomized_models():
    # `model` and `num_blocks` are assumed to be defined in the surrounding scope
    all_parameters = [param for param in model.parameters()]
    for step, param in enumerate(all_parameters[::-1]):
        random_w = torch.randn_like(param)
        # rescale the random weights to the same norm as the original ones
        # to prevent the network from outputting NaNs
        param.data = random_w * torch.norm(param.data) / torch.norm(random_w)
        if step % num_blocks == 0 or step == len(all_parameters) - 1:
            yield model
For each iteration, we successively randomize 5 more layers in the reversed sequence returned by model.parameters(), and the results are plotted in Fig. 15. We consider that BIG passes the sanity check, as the results are similar to the top row of Fig. 4 in Adebayo et al. (2018).
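The per-step similarity score is the rank order correlation between absolute attribution values before and after randomization (a sketch; SciPy's Spearman correlation is one way to compute it):

import numpy as np
from scipy.stats import spearmanr

def rank_order_correlation(attr_original, attr_randomized):
    # rank correlation between |attributions| of the original and randomized models
    rho, _ = spearmanr(np.abs(attr_original).ravel(),
                       np.abs(attr_randomized).ravel())
    return rho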
B.8 ADDITIONAL EXPERIMENT WITH SMOOTHED GRADIENT
Theorem 1 demonstrates that for a one-layer network, as we increase the standard deviation σ of the Gaussian distribution used for creating the smoothed model mσ (Cohen et al., 2019), the difference between the saliency map and the boundary-based saliency map computed on mσ is bounded by a constant λ, which is monotonically decreasing w.r.t. σ. That is, a greater σ produces a smoother model, where the saliency map (SM) explanation of mσ is a good approximation of the boundary-based saliency map (BSM). However, as the depth of the network increases, a closed-form analysis may be difficult to derive. Therefore, in this section, we aim to empirically validate that the take-away from Theorem 1 generalizes to deeper networks.
Computing SM for mσ. One practical issue in computing any gradient-related explanation for the smoothed model mσ is that mσ is defined in an integral form, which cannot be built directly with tf.keras. However, Theorem 2 shows that the smoothed gradient of the original model m is equivalent to the saliency map of the smoothed model mσ; namely, the order of smoothing and integration is exchangeable when computing the gradient.
Theorem 2 (Proposition 1 from Wang et al. (2020c)) Suppose a model f(x) satisfies max |f(x)| < ∞. For the Smoothed Gradient gSG(x), we have
gSG(x) = ∂(f ∗ q)(x)/∂x  (24)
where q(x) = N(0, σ²I) and ∗ denotes the convolution operation.
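In practice, Theorem 2 means the SM of mσ can be computed by averaging input gradients of the original model over Gaussian-perturbed copies of the input. The following PyTorch sketch is our own illustration (it assumes x does not itself require gradients and y is the target class index):

import torch

def smooth_gradient(model, x, y, sigma, n=50):
    # SM of the smoothed model m_sigma == expected input gradient of m (Thm. 2)
    grads = torch.zeros_like(x)
    for _ in range(n):
        x_noisy = (x.detach() + sigma * torch.randn_like(x)).requires_grad_(True)
        model(x_noisy)[:, y].sum().backward()
        grads += x_noisy.grad
    return grads / n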
Computing BSM for mσ. Another practical issue is that the decision boundary of a smoothed model mσ is not defined deterministically, as randomized smoothing provides a probabilistic guarantee. In this paper, we take the following steps to approximate the decision boundary of a smoothed model. To generate adversarial examples for the smoothed ResNet50 classifier with randomized smoothing, we need to back-propagate through the noise. The noise sampler is usually not accessible to an attacker who wants to fool a model with randomized smoothing; however, our goal in this section is not to reproduce a practically realizable attack, but to find points on the boundary. We therefore sample the noise prior to running the PGD attack, and we use the same noise across all instances. The steps are listed as follows (a condensed sketch follows the list):
1. We use numpy.random.randn as the sampler for Gaussian noise with its random seed set to 2020. We use 50 random noises per instance.
2. In PGD attack, we aggregate the gradients of all 50 random inputs before we take a regular step to update the input.
3. We set ε = 3.0 and we run at most 40 iterations with a step size of 2ε/40.
4. The early stop criteria for the loop of PGD is that when less than 10% of all randomized points have the original prediction.
5. When computing Smooth Gradient for the original points or for the adversarial points, we use the same random noise that we generated to approximate the smoothed classifier.
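A condensed sketch of steps 1-5 (our own illustration; the fixed noise tensor is shared across instances as described, and the early-stop test of step 4 is omitted for brevity):

import numpy as np
import torch
import torch.nn.functional as F

rng = np.random.RandomState(2020)  # step 1: fixed sampler and seed

def pgd_on_smoothed(model, x, y, sigma, eps=3.0, steps=40, n_noise=50):
    noise = sigma * torch.from_numpy(
        rng.randn(n_noise, *x.shape[1:]).astype(np.float32))
    delta = torch.zeros_like(x, requires_grad=True)
    step_size = 2 * eps / steps                        # step 3
    for _ in range(steps):
        # step 2: aggregate gradients over all noisy copies before each update
        loss = F.cross_entropy(model(x + delta + noise), y.repeat(n_noise))
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad / delta.grad.norm()
            delta *= torch.clamp(eps / delta.norm(), max=1.0)  # project to the L2 ball
        delta.grad.zero_()
    return (x + delta).detach()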
Results. We run the experiment with 500 images from ImageNet on ResNet50, as this computation is significantly more expensive than the previous experiments. We compute the `2 distances between the SM and the BSM obtained from the steps above for several values of σ, as shown in Fig. 11. Notably, the trend of the log difference against the standard deviation σ of the Gaussian noise validates that the qualitative meaning of Theorem 1 holds even for large networks. That is, as the model becomes smoother, the saliency map explanation becomes a good approximation of the boundary-based saliency map.
C SYMMETRY OF ATTRIBUTION METHODS
Sundararajan et al. (2017) prove that the linear path is the only path integral that satisfies symmetry; that is, when the order of two features is swapped for a network that does not use any order information from the input, their attribution scores should not change. One simple way to show the importance of symmetry is the following example; we refer readers to Sundararajan et al. (2017) for more analysis.
Example 1 Consider a function f(x, y) = min(x, y); to attribute the output of f to the inputs at x = 1, y = 1, we consider the baseline x = 0, y = 0. An example non-linear path from the baseline to the input is (x = 0, y = 0) → (x = 1, y = 0) → (x = 1, y = 1). On this path, f(x, y) = min(x, y) = y after the point (x = 1, y = 0); therefore, the gradient integral returns 0 for the attribution score of x and 1 for y (we ignore the infinitesimal contribution of the segment (x = 0, y = 0) → (x = 1, y = 0)). Similarly, when choosing the path (x = 0, y = 0) → (x = 0, y = 1) → (x = 1, y = 1), we find x is more important. Only the linear path treats the two variables symmetrically, assigning 0.5 to each in this case.
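The asymmetry of the two corner paths in Example 1 is easy to reproduce numerically (a small sketch of ours; we accumulate the per-coordinate product of the subgradient of min(x, y) with the path increments using Euler steps):

import numpy as np

def path_attribution(path, steps=1000):
    # integrate grad(min(x, y)) elementwise along a piecewise-linear path
    attr = np.zeros(2)
    for p0, p1 in zip(path[:-1], path[1:]):
        for t in np.linspace(0.0, 1.0, steps, endpoint=False):
            p = p0 + t * (p1 - p0)
            if np.isclose(p[0], p[1]):
                grad = np.array([0.5, 0.5])   # symmetric subgradient on the tie
            elif p[0] < p[1]:
                grad = np.array([1.0, 0.0])
            else:
                grad = np.array([0.0, 1.0])
            attr += grad * (p1 - p0) / steps
    return attr

o, ex, ey, one = np.zeros(2), np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.ones(2)
print(path_attribution([o, ex, one]))  # ~[0, 1]: all credit to y
print(path_attribution([o, ey, one]))  # ~[1, 0]: all credit to x
print(path_attribution([o, one]))      # ~[0.5, 0.5]: the symmetric, linear path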
D COUNTERFACTUAL ANALYSIS IN THE BASELINE SELECTION
The discussion in Sec. 6 shows an example where there are two dogs in the image. IG with a black baseline suggests that the body of the white dog is also useful to the model for predicting its label, while the black dog is mixed: part of the black dog receives positive attributions and the rest contributes negatively to the prediction. However, our proposed method BIG clearly shows that the most important part is the black dog, followed by the white dog. To validate whether the model is actually using the white dog, we manually remove the black dog or the white dog from the image and check whether the model retains its prediction. The result is shown in Fig. 12. Clearly, when the black dog is removed, the model changes its prediction from Labrador retriever to English foxhound, while removing the white dog does not change the prediction. This result supports the claim that BIG is more reliable than IG with a black baseline in this case, as a more faithful explanation of the classification result for this instance.
E ADDITIONAL VISUALIZATIONS FOR BIG
More visualizations comparing BIG with other attributions can be found in Fig. 16 and 17. We show several examples in Fig. 18 where there is more than one object in the input and we explain the model's Top1 prediction; BIG is able to localize the objects that are actually relevant to the predicted label. | 1. What is the main contribution of the paper regarding explanation techniques for deep classifiers?
2. What are the strengths of the proposed methods, particularly in their ability to achieve better explanations?
3. How does the reviewer assess the trade-off between smoothening the learned function and its discrimination ability?
4. Does the reviewer have any concerns regarding the faithfulness of smoothgrad as an explanation method?
5. Are there any inconsistencies in the text regarding the meaning of "boundary segments"?
6. How reliable are the observations in the experiments section, considering the approximation using an ensemble of adversarial example methods?
7. Could the fact that BSM does not show improvement on standard models be due to this approximation?
8. Are there any minor points regarding notation or wording in the paper that could be improved? | Summary Of The Paper
Review | Summary Of The Paper
The idea is that as one smoothens the decision boundary of a piecewise linear function f (e.g., from a ReLU-Net), its saliency map g obtained by g = df/dx gets closer (in the sense of ||g − n||2) to the normal n of the closest boundary hyperplane. The authors then propose two variants of explanation techniques based on the nearest decision boundary hyperplane and try them on explaining trained deep image classifiers. The results seem to corroborate the better alignment with the nearest boundary hyperplane's normal. Also, the proposed methods achieve better explanations as measured by locality and overlap with ground-truth bounding boxes.
Review
Strengths
the general idea of alignment with the nearest decision hyperplane normal and the specific modification of IG seem quite novel and plausible.
In the experiments, BIG achieves significantly better results using various explanation metrics.
Questions to authors
Smoothening the learnt function, at some point, should start losing the discrimination ability of the learnt function. Have the authors pushed enough to find some indication of this trade-off?
From theorem 1, I can understand why a smoother learned function can give rise to a more faithful saliency-based explanation, but I cannot see how it advocates smoothgrad as an explanation. Wouldn't smoothgrad be faithful to a very likely different function than the actual learned function and thus not necessarily faithful to the true learned function?
The text before definition 6 argues for BIG based on the existence of multiple boundary segments near a point and proposes definition 6, which integrates over the segment connecting a point x to its nearest adversarial x′. However, shouldn't the nearest decision boundary segment for all points along the line segment x → x′ remain the same? The integral is taken over the standard saliency g, which of course can change linear regions, but the rationale (of wanting to find different decision boundary hyperplanes) does not seem to hold for the proposal.
The previous question could be simply rectified if the meaning of "boundary segments" is the linear regions' boundary segments as opposed to the decision boundary segments but then I think "boundary" has been used as "decision boundary" at occasions before this definition, e.g., in def 5. Am I mistaken? If not, the text needs a rewrite to distinguish between "regions boundary segments" and "decision boundary segments".
Due to the approximation using an ensemble of adversarial example methods, we should expect that the found segment is very likely not the closest decision boundary segment (since we know from many works that the density of linear regions is extremely high in the input space). In light of that, how reliable are the observations in the experiments section? Especially with regards to the deviation from the normal vector (Figure 3.a).
Following up on the previous question, could the fact that BSM does not show improvement on standard models be due to this approximation?
Minor points
on many occasions, when referring to boundary facets of a polytope, better to use hyperplane as opposed to segment to avoid confusion with line segments that are used as linear path.
In definition 3, f(α + ϵ) → f(x + ϵ)
In Theorem 1, "∀x′′ ∈ B(. . .).": better to replace "." → "," (although a minor point, it makes reading the statement challenging at first glance).
O
(
1
σ
c
)
?
two different notations are used for definition (:= or else)
better to refer to Figure 3.a and 3.b as tables
page 7: "a smaller difference between the difference between attributions"
page 7: "instead evaluates computes"
page 8: "It is naturally to treat BIG frees users from the baseline selection" |
In summary, we rethink the target question an explanation should answer for a classification task, the important features that the classifier uses to place the input into a specific side of the decision boundary. We find the answer to our question relates to the normal vectors of decision boundaries in the neighborhood and propose BSM and BIG as boundary attribution approaches. Empirical evaluations on STOA classifiers validate that our approaches provide more concentrated, sharper and more accurate explanations than existing approaches. Our idea of leveraging boundaries to explain classifiers connects explanations with the adversarial robustness and help to encourage the community to improve model quality for explanation quality.
A THEOREMS AND PROOFS
A.1 PROOF OF PROPOSITION 1
Proposition 1 Suppose that f has a (λ, δ)-robust saliency map gS at x, x′ is the closest point on the closest decision boundary segment to x and ||x′ − x|| ≤ δ, and that n is the normal vector of that boundary segment. Then ||n− gS(x)|| ≤ λ||x− x′||. To compute n can be efficiently computed by taking the derivatice of the model’s output w.r.t to the point that is on the decision boundary such that n = ∂f(x
′) ∂x′ and ∀xm ∈ R
d, F (xm) = F (x) if ||xm − x|| ≤ ||x′ − x||. Because we assume ||x − x′|| ≤ δ, and the model has (λ, δ)-robust Saliency Map, then by Def. 4 we have
||n− gS(x)|| ≤ λ||x− x′||
A.2 PROOF OF THEOREM 1
Theorem 1 Let m(x) = ReLU(Wx) be a one-layer network and when using randomized smoothing, we write mσ(x). Let g(x) be the SM for mσ(x) and suppose ∀x′′ ∈ B(x, ||x − x′||), ||g(x′′)|| ≥ c where x′ is the closest adversarial example, we have the following statement holds: ||g(x)− g(x′)|| ≤ λ where λ is monotonically decreasing w.r.t σ.
Proof:
We begin our proof by firstly introducing Randomized Smoothing.
Definition 7 (Randomized Smoothing (Cohen et al., 2019)) Suppose F (x) = argmaxc fc(x), the smoothed classifier G(x) is defined as
G(x) := argmax c Pr [F (x+ ) = c] (1)
where ∼ N (0, σ2I)
Now the rest of the proof of is three-fold: 1) firstly we will show that there exist a non-linear activation function Er(x) such that the output of the smoothed ReLU network mσ(x) is equivalent when replacing the ReLU activation with Er activation; 2) secondly derive the difference between the saliency map of the network with Er activation; and 3) lastly, we show that the difference between SM and BSM of the network with Er activation is bounded, which is inversely proportional to the standard deviation used to create the smoothed ReLU network mσ(x).
(1) Step I: the Error (Er) activation function and randomized smoothing.¹

Randomized smoothing creates a smoothed model that returns whichever label the base classifier is most likely to return under perturbations drawn from Gaussian noise. We now examine the output of each class under this noise. Let y_i be the output of the i-th class of the network ReLU(Wx), that is,

y_i = E_{ε∼N(0,σ²I)} [ReLU(w_i^T(x + ε))]   (2)

To simplify the notation, we write E for E_{ε∼N(0,σ²I)}. Expanding Equation (2), we obtain

y_i = E[ReLU(w_i^T x + w_i^T ε)] = E[ReLU(u + ε′)]   (3)

where u = w_i^T x and ε′ = w_i^T ε. Here u is a scalar, and ε′ follows a zero-centered univariate Gaussian with standard deviation s ∝ σ, because the dot product between the constant weight vector w_i and the random vector ε is a linear combination of the dimensions of ε, and the covariance between the dimensions of ε is 0 for the Gaussian noise used for randomized smoothing in the literature (Cohen et al., 2019).
¹We appreciate the discussion with the author Pan Kessel of Dombrowski et al. (2019) for the derivation from Equation (5) to (6)
By expanding the expectation symbol to its integral form, we obtain:

y_i = 1/(s√(2π)) ∫_{−∞}^{∞} exp(−ε′²/(2s²)) · ReLU(u + ε′) dε′   (4)

Let τ = u + ε′ and notice that ReLU(τ) = 0 if τ < 0; the equation above can then be rewritten as:

y_i = 1/(s√(2π)) ∫_{0}^{∞} exp(−(τ − u)²/(2s²)) · τ dτ   (5)
    = 1/√(2π) · exp(−u²/(2s²)) · s + (u/2) · [1 + Erf(u/(√2·s))]   (6)
where Erf is the error function, defined as Erf(x) = (2/√π) ∫₀ˣ exp(−t²) dt. We therefore define an Er activation for an input u with standard deviation s as

Er(u; s) = 1/√(2π) · exp(−u²/(2s²)) · s + (u/2) · [1 + Erf(u/(√2·s))]   (8)
and we show that

y_i = E_{ε∼N(0,σ²I)} [ReLU(w_i^T(x + ε))] = Er(w_i^T x; s(σ))   (9)

That is, to analyze the gradient of the output of a smoothed model w.r.t. the input, we can alternatively analyze the gradient of an equivalent Er network. We plot three examples of the Er activation in Fig. 7 so readers can see what the function looks like.
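As a quick numerical check of Equation (9), the following sketch (not part of the original experiments; the helper names are ours) compares the closed-form Er activation against a Monte-Carlo estimate of the smoothed ReLU:

import numpy as np
from scipy.special import erf

def er(u, s):
    # Closed-form Er activation from Equation (8).
    return s * np.exp(-u**2 / (2 * s**2)) / np.sqrt(2 * np.pi) \
        + 0.5 * u * (1 + erf(u / (np.sqrt(2) * s)))

rng = np.random.default_rng(0)
u, s = 0.7, 1.5
# Monte-Carlo estimate of E[ReLU(u + eps)] with eps ~ N(0, s^2).
mc = np.maximum(0.0, u + s * rng.standard_normal(1_000_000)).mean()
print(er(u, s), mc)  # the two values should agree to ~3 decimal places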
(2) Step II: the saliency map of an Er network.

By the definition of the saliency map (Def. 1) and the chain rule, we have:

SM(x) = ∂y_i/∂x = (∂y_i/∂u) · (∂u/∂x)   (let u = w_i^T x)   (10)
      = ∂Er(u; s)/∂u · w_i   (11)
      = (1/2) · [1 + Erf(u/(√2·s))] · w_i   (12)

The transition from Equation (11) to (12) uses the fact that the derivative of Erf(x) is (2/√π) exp(−x²).
(3) Step III: the difference between the SM and the BSM of an Er network.

Let x′ be the closest point on the decision boundary of the smoothed classifier mσ, and let ||x − x′|| = r (for the closed-form expression of r, see Cohen et al. (2019)). By the definition of BSM, we have

BSM(x) = ∂y_i(x′)/∂x′ = (1/2) · [1 + Erf(u′/(√2·s))] · w_i,   u′ = w_i^T x′   (13)

The difference between SM and BSM is therefore

||BSM(x) − SM(x)|| = ||(1/2)[1 + Erf(u′/(√2·s))] · w_i − (1/2)[1 + Erf(u/(√2·s))] · w_i||   (14)
                   = (1/2) · |Erf(u′/(√2·s)) − Erf(u/(√2·s))| · ||w_i||   (15)
                   ≤ (1/2) · [|Erf(u′/(√2·s))| + |Erf(u/(√2·s))|] · ||w_i||   (Triangle Inequality)   (16)

We notice that u′ is bounded, because u′ = w_i^T x′ ≤ ||w_i|| · ||x′|| ≤ ||w_i|| · (||x|| + r), and similarly u = w_i^T x ≤ ||w_i|| · (||x|| + r). Because the Erf function is increasing in its input and s > 0, we arrive at the following inequality:

||BSM(x) − SM(x)|| ≤ λ   (17)

where

λ = Erf(||w_i|| · (||x|| + r)/(√2·s)) · ||w_i||   (18)

We drop the absolute-value symbols because the output of Erf is positive when its input is positive. Now, given that ||w_i||, r and ||x|| are constants for a given input x, the upper bound Erf(||w_i|| · (||x|| + r)/(√2·s)) · ||w_i|| is monotonically increasing as s decreases. From Step I we know that s ∝ σ; therefore there exists an upper bound λ on the difference between the SM and the BSM of a smoothed classifier, and λ is monotonically decreasing w.r.t. the standard deviation of the Gaussian noise.
B EXPERIMENT DETAILS AND ADDITIONAL RESULTS
B.1 METRICS WITH BOUNDING BOXES
We will use the following extra notations in this section. Let X , Z and U be a set of indices of all pixels, a set of indices of pixels with positive attributions, and a set of indices of pixels inside the bounding box for a target attribution map g(x). We denote the cardinality of a set S as |S|.
Localization (Loc.) (Chattopadhyay et al., 2017) evaluates the overlap between the bounding box and the pixels with positive attributions.
Definition 8 (Localization) For a given attribution map g(x), the localization score (Loc.) is defined as
Loc := |Z ∩ U| / (|U| + |Z ∩ (X \ U)|)   (19)
Energy Game (EG) (Wang et al., 2020a) instead computes the portion of attribution scores within the bounding box.
Definition 9 (Energy Game) For a given attribution map g(x), the energy game EG is defined as
EG := Σ_{i∈Z∩U} g(x)_i / Σ_{i∈X} max(g(x)_i, 0)   (20)
Positive Percentage (PP) evaluates the sum of positive attribution scores over the total (absolute) attribution scores within the bounding box.
Definition 10 (Positive Percentage) Let V be the set of indices of all pixels with negative attribution scores. For a given attribution map g(x), the positive percentage PP is defined as

PP := Σ_{i∈Z∩U} g(x)_i / (Σ_{i∈Z∩U} g(x)_i − Σ_{i∈V∩U} g(x)_i)   (21)
Concentration (Con.) evaluates the sum of attribution “mass” within the bounding box, with each pixel weighted by the inverse of its distance to the “mass” center of the attributions. Notice that c_x and c_y can be computed with scipy.ndimage.center_of_mass. This definition rewards attributions whose high-magnitude scores lie close to the mass center.
Definition 11 (Concentration) For a given attribution map g(x), the concentration Con. is defined as follows:

Con. := Σ_{i∈U} ĝ(x)_i / √((i_x − c_x)² + (i_y − c_y)²)   (22)

where ĝ is the normalized attribution map such that ĝ_i = g_i / Σ_{i∈U} |g_i|; i_x, i_y are the coordinates of pixel i; and

c_x = Σ_{i∈U} i_x ĝ(x)_i / Σ_{i∈U} ĝ(x)_i,   c_y = Σ_{i∈U} i_y ĝ(x)_i / Σ_{i∈U} ĝ(x)_i   (23)
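A compact sketch of these four metrics follows; it assumes attr is a 2-D attribution map and bbox a same-shaped boolean mask (both names are ours), adds a small guard for zero distances, and delegates the mass center to scipy.ndimage.center_of_mass as the paper notes:

import numpy as np
from scipy.ndimage import center_of_mass

def bbox_metrics(attr: np.ndarray, bbox: np.ndarray) -> dict:
    pos = attr > 0                                    # Z: positively-attributed pixels
    loc = (pos & bbox).sum() / (bbox.sum() + (pos & ~bbox).sum())   # Eq. (19)
    eg = attr[pos & bbox].sum() / np.maximum(attr, 0).sum()         # Eq. (20)
    pp = attr[pos & bbox].sum() / (
        attr[pos & bbox].sum() - attr[(attr < 0) & bbox].sum())     # Eq. (21)
    g_hat = attr / np.abs(attr[bbox]).sum()           # normalize by mass inside the box
    cx, cy = center_of_mass(g_hat)                    # attribution "mass" center
    xs, ys = np.indices(attr.shape)
    dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2) + 1e-8
    con = (g_hat[bbox] / dist[bbox]).sum()                          # Eq. (22)
    return {"Loc": loc, "EG": eg, "PP": pp, "Con": con}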
Besides metrics related to bounding boxes, there are other metrics in the literature used to evaluate attribution methods (Adebayo et al., 2018; Ancona et al., 2017; Samek et al., 2016; Wang et al., 2020b; Yeh et al., 2019). We focus on metrics that use provided bounding boxes, as we believe that they offer a clear distinction between likely relevant features and irrelevant ones.
B.2 IMPLEMENTING BOUNDARY SEARCH
Our boundary search uses a pipeline of PGDs, CW and AutoPGD. Adversarial examples returned by each method are compared with the others, and the closest one is returned. If no adversarial example is found, the pipeline returns the point from the last iteration of the first method (PGDs in our case). Hyper-parameters for each attack can be found in Table 2. The implementations of PGDs and CW are based on Foolbox (Rauber et al., 2020; 2017), and the implementation of AutoPGD is based on the authors’ public repository² (we only use the apgd-ce and apgd-dlr losses, for efficiency reasons). All computations are done using a GPU accelerator Titan RTX with a memory size of 24 GB. Comparisons of the results of the ensemble of these three approaches are shown in Fig. 10a.
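The following sketch illustrates the ensemble logic under stated assumptions (Foolbox 3's attack interface; the attack list and ε schedule are illustrative rather than the exact hyper-parameters of Table 2):

import torch
import foolbox as fb

def closest_boundary_point(model, x, y, epsilons):
    # Return the closest adversarial example found by an ensemble of attacks.
    fmodel = fb.PyTorchModel(model.eval(), bounds=(0, 1))
    attacks = [fb.attacks.L2ProjectedGradientDescentAttack(),
               fb.attacks.L2CarliniWagnerAttack()]
    best, best_dist = None, float("inf")
    for attack in attacks:
        for eps in epsilons:
            _, clipped, is_adv = attack(fmodel, x, y, epsilons=eps)
            if is_adv.item():
                dist = torch.norm((clipped - x).flatten()).item()
                if dist < best_dist:          # keep the closer boundary point
                    best, best_dist = clipped, dist
    return best  # in practice, fall back to the last PGD iterate if None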
B.3 HYPER-PARAMETERS FOR ATTRIBUTION METHODS
All attributions are implemented with Captum (Kokhlikyan et al., 2020) and visualized with Trulens (Leino et al., 2021a). For BIG and IG, we use 20 intermediate points between the baseline and the input, and the interpolation method is set to riemann_trapezoid. For AGI, we build on the authors’ public repository³. The choice of hyper-parameters follows the authors’ defaults for ImageNet, and we make minimal changes to adapt them to CIFAR-10 (see Fig. 10b).
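With a boundary point in hand, BIG reduces to Captum's standard IG call with the boundary point as the baseline; a minimal sketch (the variable names are ours):

from captum.attr import IntegratedGradients

def big_attribution(model, x, x_boundary, label):
    # BIG = Integrated Gradients with the closest boundary point as baseline.
    ig = IntegratedGradients(model)
    return ig.attribute(x, baselines=x_boundary, target=label,
                        n_steps=20, method="riemann_trapezoid")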
To visualize the attribution map, we use the HeatmapVisualizer with blur=10, normalization_type="signed_max", and default values for other keyword arguments from Trulens.
B.4 DETAILED RESULTS ON LOCALIZATION METRICS
We show the average scores for each localization metric in Sec. 5. We also show boxplots of the scores for each localization metric in Fig. 8 for the standard ResNet50 model and in Fig. 9 for the robust ResNet50 (ℓ2, ε = 3.0). Higher scores are better results.
2https://github.com/fra31/auto-attack 3https://github.com/pd90506/AGI
B.5 ADDITIONAL COMPARISON WITH AGI
We additionally compare the ability of BIG and AGI to localize relevant features when only PGDs are used to return the closest boundary points; that is, we recursively increase the norm bound and perform a PGD attack until we first succeed in finding an adversarial point. We denote this approach as BIGp. Note that BIGp still differs from AGI in the type of path, i.e. lines versus curves, over which the integral is performed. That is, AGI aggregates path integrals starting from a set of adversarial points found by targeted PGD attacks, whereas BIGp starts from the adversarial point returned by an untargeted PGD attack. We use the same parameters for both PGD and AGI as in Fig. 2, and we run the experiments over the same dataset used in Sec. 5.1. For reference, we also include the results of IG. The results are shown in Table 3. We notice that after removing CW and AutoPGD, BIGp actually performs better than AGI, and even slightly better than BIG for the robust model. One reason for the small improvement from BIGp might be that, for a robust network, the gradient at each iteration of the PGD attack is more informative and less noisy than for a standard model, so the attack can better approximate the closest decision boundary. The results in Table 3 therefore demonstrate that BIG and BIGp are able to localize relevant features better than AGI.
B.6 ADDITIONAL LOCALIZATION METRIC
Besides the localization metrics used in Sec. 5.1, we discuss an additional localization metric frequently used for evaluating attention- and CAM-based explanations: Top1-Loc (Choe & Shim, 2019; Aggarwal et al., 2020). Top1-Loc is calculated as follows: an instance is considered Top1-Loc correct given an attribution if 1) the prediction is Top1-correct; and 2) it is GT-Loc correct – namely, the IoU of the ground-truth bounding box and the area highlighted by the attribution is more than 50%. When only using images that are Top1-correct, Top1-Loc reduces to GT-Loc. Top1-Loc differs from the other localization metrics used for evaluating attribution methods in that it takes the prediction behavior of the target model into account, which in general is not an axiom motivating gradient-based attribution methods. Since the previous evaluations only use images on which the model makes correct Top1 predictions, we use the same true-positive images in this section; Top1-Loc accuracy then reduces to GT-Loc accuracy, so we measure GT-Loc directly. To determine which part of the image is highlighted by the attribution, we compute a threshold for each attribution map: a pixel is considered within the highlighted region if and only if its attribution score is higher than the threshold. For a given attribution map, we take the threshold t to be the q-th percentile of the absolute values of the attribution scores. We plot the GT-Loc accuracy against q in Fig. 13. We notice that attention-based and CAM-based attributions usually produce a cloud-like visualization because of the blurring technique or upsampling layers used to compute the results. To ensure GT-Loc is meaningful for the gradient-based attributions we are interested in this paper, we also include results (Fig. 14) where we first apply a Gaussian blur (σ = 3.0) to the attribution map before calculating the GT-Loc accuracy. The results are aggregated over 1500 images from ImageNette on a standard ResNet50 and a robust ResNet50, respectively. Higher GT-Loc scores are better.
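A sketch of the GT-Loc computation described above (percentile thresholding followed by an IoU test; the 50% cut-off follows the text, the helper itself is ours):

import numpy as np

def gt_loc_correct(attr: np.ndarray, bbox: np.ndarray, q: float) -> bool:
    # Highlight pixels whose |attribution| exceeds the q-th percentile.
    t = np.percentile(np.abs(attr), q)
    highlight = np.abs(attr) > t
    inter = (highlight & bbox).sum()
    union = (highlight | bbox).sum()
    return inter / union > 0.5  # GT-Loc correct iff IoU > 50%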
Behavior of BIG. The results in Fig. 13 and 14 show that BIG is better than all other attributions on standard models excluding SG, and uniformly better, including SG, on a robust model. Before we offer an explanation for the behavior of SG (green curves) on standard models in the next paragraph, we also observe that: 1) blurring only changes the GT-Loc scores but not the overall rankings across attributions; 2) a threshold corresponding to a percentile near 40% provides the best GT-Loc scores for all methods; and 3) gradient-based attributions generally yield worse GT-Loc (or Top1-Loc) scores than the CAM-based and attention-based approaches in the literature (Choe & Shim, 2019; Aggarwal et al., 2020), which is not surprising because gradient-based approaches are usually axiomatically justified to be faithful to the model. Since the model will inevitably learn some spurious features from the input, gradient-based attributions end up noisier than attention- and CAM-based ones. Therefore, when localizing relevant features, users may want to consult activation-based approaches, i.e. CAMs, but when debugging and ensuring the network learns fewer spurious and irrelevant features, users should instead use gradient-based approaches because of the axioms behind them.
Behavior of SG on Standard Models. SG is uniformly better than all other approaches in terms of GT-Loc accuracy on a standard model, which is surprising but not totally unexpected. We believe the reason is that SG is actually the gradient of a smoothed counterpart of the standard model (see the discussion near Theorem 1), which is more robust. The comparison between SG and the other approaches is therefore not apples-to-apples: SG may be less faithful to the standard model – namely, SG is more faithful to the smoothed classifier. That is very likely why SG is worse than BIG in Fig. 13b and 14b, where smoothing offers only a marginal robustness improvement for a model that has already been robustly trained.
B.7 SANITY CHECK FOR BIG
We perform sanity checks for BIG using rank-order correlations between the absolute values of BIG attributions when randomizing the weights from the top layer to the bottom (Adebayo et al., 2018). To ensure the output of the model does not become NaN when randomizing the weights of each trainable layer, we replace each weight matrix with a random matrix of the same norm, as follows.
def _randomized_models(model, num_blocks=5):
    all_parameters = list(model.parameters())
    # Randomize weights from the top layer to the bottom.
    for step, param in enumerate(all_parameters[::-1]):
        random_w = torch.randn_like(param)
        # We make sure the randomized weights have the same norm to
        # prevent the network from outputting NaN results.
        param.data = torch.nn.parameter.Parameter(
            random_w * torch.norm(param.data) / torch.norm(random_w.data))
        if step % num_blocks == 0 or step == len(all_parameters) - 1:
            yield model
At each iteration we randomize five more layers, proceeding through the reversed sequence returned by model.parameters(); the results are plotted in Fig. 15. We consider that BIG passes the sanity check, as the results are similar to the top row of Fig. 4 in Adebayo et al. (2018).
B.8 ADDITIONAL EXPERIMENT WITH SMOOTHED GRADIENT
Theorem 1 demonstrates that, for a one-layer network, as we increase the standard deviation σ of the Gaussian distribution used for creating the smoothed model mσ (Cohen et al., 2019), the difference between the saliency map and the boundary-based saliency map computed on mσ is bounded by a constant λ that is monotonically decreasing w.r.t. σ. That is, a greater σ produces a more smoothed model, for which the saliency map (SM) explanation of mσ is a good approximation of the boundary-based saliency map (BSM). However, as the depth of the network increases, a closed-form analysis may be difficult to derive. Therefore, in this section we aim to empirically validate that the take-away from Theorem 1 generalizes to deeper networks.
Computing SM for mσ. One practical issue in computing any gradient-related explanation for the smoothed model mσ is that mσ is defined in an integral form, which cannot be directly built with tf.keras. However, Theorem 2 shows that the smoothed gradient of the original model m is equivalent to the saliency map of the smoothed model mσ; namely, the order of smoothing and integration is exchangeable when computing the gradient.
Theorem 2 (Proposition 1 from Wang et al. (2020c)) Suppose a model f(x) satisfies max |f(x)| <∞. For Smoothed Gradient gSG(x), we have
g_SG(x) = ∂(f ⊛ q)(x)/∂x   (24)

where q(x) = N(0, σ²I) and ⊛ denotes the convolution operation.
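Per Theorem 2, the SM of mσ can therefore be computed as the Smoothed Gradient of m; a minimal PyTorch sketch (sample count and helper name are ours):

import torch

def smoothed_gradient(model, x, label, sigma, n_samples=50):
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        model(noisy)[0, label].backward()   # gradient of the class score
        grads += noisy.grad
    return grads / n_samples                # Monte-Carlo estimate of SG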
Computing BSM for mσ. Another practical issue is that the decision boundary of a smoothed model mσ is not defined deterministically, since randomized smoothing provides a probabilistic guarantee. In this paper, we take the following steps to approximate the decision boundary of a smoothed model. To generate adversarial examples for the smoothed classifier of ResNet50 with randomized smoothing, we need to back-propagate through the noise. The noise sampler is usually not accessible to an attacker who wants to fool a model with randomized smoothing; however, our goal in this section is not to reproduce a realistic attack, but to find a point on the boundary. We therefore sample the noise before running the PGD attack and use the same noise across all instances. The steps are listed as follows (a sketch of the procedure follows the list):
1. We use numpy.random.randn as the sampler for Gaussian noise with its random seed set to 2020. We use 50 random noises per instance.
2. In PGD attack, we aggregate the gradients of all 50 random inputs before we take a regular step to update the input.
3. We set ε = 3.0 and we run at most 40 iterations with a step size of 2ε/40.
4. The early-stopping criterion for the PGD loop is that fewer than 10% of the randomized points retain the original prediction.
5. When computing Smooth Gradient for the original points or for the adversarial points, we use the same random noise that we generated to approximate the smoothed classifier.
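A sketch of steps 1–4 (fixed-noise PGD against the smoothed classifier; tensor shapes, the ℓ2 step rule, and helper names are our assumptions):

import numpy as np
import torch
import torch.nn.functional as F

def pgd_on_smoothed(model, x, label, sigma, eps=3.0, steps=40, n_noise=50):
    rng = np.random.RandomState(2020)              # step 1: fixed noise sampler
    noise = sigma * torch.from_numpy(
        rng.randn(n_noise, *x.shape[1:]).astype(np.float32))
    x_adv, step_size = x.clone(), 2 * eps / steps  # step 3
    targets = torch.full((n_noise,), label, dtype=torch.long)
    for _ in range(steps):
        x_in = (x_adv + noise).detach().requires_grad_(True)
        grad = torch.autograd.grad(
            F.cross_entropy(model(x_in), targets), x_in)[0].mean(0)
        x_adv = x_adv + step_size * grad / (grad.norm() + 1e-12)  # step 2
        delta = x_adv - x                          # project back onto the l2 ball
        x_adv = x + delta * min(1.0, eps / (delta.norm().item() + 1e-12))
        if (model(x_adv + noise).argmax(1) == label).float().mean() < 0.1:
            break                                  # step 4: early stop
    return x_adv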
Results. We run this experiment with 500 images from ImageNet on ResNet50, as the computation is significantly more expensive than the previous experiments. We compute the ℓ2 distances between the SM and the BSM obtained via the steps above for several values of σ, as shown in Fig. 11. Notably, the trend of the log difference against the standard deviation σ of the Gaussian noise validates that the qualitative meaning of Theorem 1 holds even for large networks: as the model becomes more smoothed, the saliency map explanation becomes a better approximation of the boundary-based saliency map.
C SYMMETRY OF ATTRIBUTION METHODS
Sundararajan et al. (2017) prove that a linear path is the only path integral that satisfies symmetry; that is, when the order of two features is exchanged for a network that does not use any order information from the input, their attribution scores should not change. One simple way to show the importance of symmetry is the following example; we refer readers to Sundararajan et al. (2017) for more analysis.
Example 1 Consider a function f(x, y) = min(x, y), and suppose we attribute the output of f to the inputs at x = 1, y = 1 with the baseline x = 0, y = 0. An example non-linear path from the baseline to the input is (x = 0, y = 0) → (x = 1, y = 0) → (x = 1, y = 1). On this path, f(x, y) = min(x, y) = y after the point (x = 1, y = 0); therefore, the gradient integral returns 0 for the attribution score of x and 1 for y (we ignore the infinitesimal contribution of (x = 0, y = 0) → (x = 1, y = 0)). Similarly, when choosing the path (x = 0, y = 0) → (x = 0, y = 1) → (x = 1, y = 1), we find that x is more important. Only the linear path assigns identical attributions to the two symmetric variables in this case.
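The path dependence in Example 1 is easy to reproduce numerically. In the sketch below (ours), min is written via abs so that autograd splits the tied gradient evenly on the diagonal; torch.min would route a tie's gradient entirely to its first argument:

import torch

def f(p):
    # min(x, y) = 0.5 * (x + y - |x - y|), with an even gradient split at ties.
    return 0.5 * (p[0] + p[1] - (p[0] - p[1]).abs())

def path_attribution(path, steps=200):
    attr = torch.zeros(2)
    for p0, p1 in zip(path[:-1], path[1:]):
        for t in torch.linspace(0, 1, steps):
            p = (p0 + t * (p1 - p0)).detach().requires_grad_(True)
            f(p).backward()
            attr += p.grad * (p1 - p0) / steps   # Riemann sum of g . dp
    return attr

x_first = [torch.tensor([0., 0.]), torch.tensor([1., 0.]), torch.tensor([1., 1.])]
y_first = [torch.tensor([0., 0.]), torch.tensor([0., 1.]), torch.tensor([1., 1.])]
linear = [torch.tensor([0., 0.]), torch.tensor([1., 1.])]
for path in (x_first, y_first, linear):
    print(path_attribution(path))  # approx [0, 1], [1, 0], and [0.5, 0.5]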
D COUNTERFACTUAL ANALYSIS IN THE BASELINE SELECTION
The discussion in Sec. 6 shows an example with two dogs in the image. IG with a black baseline suggests that the body of the white dog is also useful to the model for predicting the label, while the black dog is a mix: part of the black dog receives positive attributions and the rest contributes negatively to the prediction. Our proposed method BIG, in contrast, clearly shows that the most important region is the black dog, followed by the white dog. To validate whether the model actually uses the white dog, we manually remove the black dog or the white dog from the image and check whether the model retains its prediction. The result is shown in Fig. 12. Clearly, when removing the black dog, the model changes its prediction from Labrador retriever to English foxhound, whereas removing the white dog does not change the prediction. This result supports the conclusion that, for this instance, BIG is a more faithful explanation of the classification result than IG with a black baseline.
E ADDITIONAL VISUALIZATIONS FOR BIG
More visualizations comparing BIG with other attributions can be found in Fig. 16 and 17. We show several examples in Fig. 18 where there is more than one object in the input and we explain the model’s Top1 prediction; BIG is able to localize the objects that are actually relevant to the predicted label.
Summary Of The Paper
The paper focuses on the intersection of gradient attribution and adversarial robustness. First, it analyzes the weaknesses of vanilla gradients: the gradient does not have to point towards the decision boundary of an n-layer ReLU network. Then the paper provides some insights into the smoothing of one-layer ReLU networks (Theorem 1). Finally, a boundary-based saliency map and an extension of integrated gradients are proposed and evaluated in terms of boundary alignment and object localization.
Review
The paper has an interesting topic: adding theoretical insights to explainability methods. The paper does especially well on providing a good intuition about the relationships of normals, polytopes and decision boundary (content of 3.1 and first part of 3.2). I also found the paper overall well written (some minor typos and duplicates are listed below). The paper's story of first analyzing the limitations of gradients, fixing the errors, and then evaluating the methods is also good. I address my concerns about the generality and rigor of theorem 1, the evaluation, and the limitations below.
Proof of theorem 1
Theorem 1 contains an ⪅ sign. After checking the appendix, it turns out that the proof is only correct for the case that the underlying approximation holds: (Dombrowski et al., 2019) points out that the random distribution

p_β(ε_i) = β / (exp(βε_i/2) + exp(−βε_i/2))²

closely resembles a normal distribution with a standard deviation σ = log(2)·√(2π)/β.
However, under which conditions does it resemble a normal distribution? (Dombrowski et al., 2019) only made this comment to explain a possible connection to SmoothGrad (see page 8 in Dombrowski et al., 2019). No concrete conditions are given on when or how closely the distributions match. I did not even find how σ was derived in (Dombrowski et al., 2019) (if you know where, please point me to it). I did a small experiment myself and plotted the distributions. For each plot, the corresponding β is given on top and the normal distribution has σ = log(2)·√(2π)/β (for the notebook with the code see this link).
[Plots of p_β against the matched normal distribution for different values of β]
As you can see, it is only close for β ≈ 1. Two solutions exist: either provide a theorem with ≤, or give a rigorous discussion of the cases where only ≈ or even > holds.
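The comparison is straightforward to reproduce; a sketch of the plotting experiment (our reconstruction, using the σ-β relation quoted above, which is itself recovered from a garbled formula):

import numpy as np
import matplotlib.pyplot as plt

def p_beta(x, beta):
    # Density induced by softplus smoothing (Dombrowski et al., 2019).
    return beta / (np.exp(beta * x / 2) + np.exp(-beta * x / 2)) ** 2

x = np.linspace(-6, 6, 500)
for beta in [0.5, 1.0, 4.0]:
    sigma = np.log(2) * np.sqrt(2 * np.pi) / beta  # quoted sigma-beta relation
    gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    plt.figure(); plt.title(f"beta = {beta}")
    plt.plot(x, p_beta(x, beta), label="p_beta")
    plt.plot(x, gauss, label="normal"); plt.legend()
plt.show()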
The other limitation of Theorem 1 is that it only holds for one-layer ReLU networks. A short discussion of why it does not hold for n-layer ReLU networks would be helpful. In addition, it should be emphasized throughout the paper that Theorem 1 is only for one-layer networks. For example, in the last paragraph of the introduction:
We present an analysis that sheds light on the previously-observed phenomeon of robust interpretability, showing that alignment between the normal vectors of decision boundaries and models’ gradients is a key ingredient (Proposition 1, Theorem 1)
Please make clear in that sentence and others that Theorem 1 only addresses one-layer networks.
At the end of section 3.2, Figure 10 is referenced as empirical validation of Theorem 1, but I do not understand the figure and caption:
distances in logarithm between SG and BSG against different standard deviations σ of the Gaussian noise. Results are computed on ResNet50. Notice the first column corresponds to σ = 0.
Please clarify what you want to evaluate with this figure, e.g. the first column says σ = 0.15.
Evaluation
I think the evaluation of the normality to the decision boundary can be improved. In Figure 3, pairs of a gradient attribution method and the corresponding boundary attribution (e.g. IG vs. BIG) are compared to evaluate how normal the attributions are. However, why not measure the normality in the feature space z(x) directly? z(x) is defined such that f_i(x) = w_i^T z(x). We know that w_i must be normal to the decision boundary, as shown in Figure 2a. The corresponding change in z-space of an attribution g(x) would be Δz = z(x) − z(x + αg(x)). Now, we can measure the similarity of the normal w_i and the different attributions: just compute cos(Δz, w_i) for all the different attributions. This evaluation would relate the estimated directions in x-space to the ground-truth normals in z-space. The current evaluation of attribution methods against their boundary equivalents cannot provide such a ground-truth reference.
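A sketch of this proposed check, assuming feature_fn computes z(x) and w is the corresponding class weight vector (both names are assumptions, not from the paper):

import torch

def normality_score(feature_fn, w, x, attribution, alpha=1e-2):
    # cos(dz, w): how well the attribution direction aligns with the
    # ground-truth normal w in z-space.
    dz = feature_fn(x) - feature_fn(x + alpha * attribution)
    return torch.nn.functional.cosine_similarity(
        dz.flatten(), w.flatten(), dim=0)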
The evaluation using the ground-truth bounding boxes is a good proxy task and seems to be executed correctly. It might make sense to only use images where the bounding box covers less than 50% of the image, as done in (Schulz et al., 2020). The attribution method in (Schulz et al., 2020) might also be an interesting candidate for the evaluation as it was also able to outperform int.grad. and smooth grad. I would also suggest focusing on one or two metrics for the bounding box task instead of four.
I would also encourage the authors to include the sanity check for weight reinitialization (Adebayo et al., 2018). It is easy to implement and should be passed by any new attribution method.
Limitations
While I do not think that the paper requires a human-subject evaluation, its lack should be mentioned in the limitation section. Also, the saliency maps look more concentrated, but would humans actually profit from this? Even if there is a significant difference, would you expect a large effect size? Please also list that Theorem 1 is only for one-layer networks in the limitations. Limitation 2 (not applicable to perturbation attributions) arises from the focus of the paper, and I think there is no need to mention it.
Minor Comments:
"In fact, the fact" (page 4)
smaller difference between the difference between (page 7)
Thefore (page 7)
Lost clause: "It is naturally to treat " (page 8)
It should be Table 3 and not Figure 3
IamgeNet (page 6)
I think it should be "The RHS of the above equation is Smoothed Gradient" (page 15)
References:
(Schulz et al., 2020) https://openreview.net/forum?id=S1xWh1rYwB
(Adebayo et al., 2018) https://arxiv.org/abs/1810.03292
After Rebuttal Update
The authors were able to rectify their proof and also provided details for my other questions. While the initial submission was a clear reject, the rebuttal was well done. I agree with the other reviewers' concerns about novelty. Overall, I increased my rating to marginally above acceptance.
ICLR | Title
Robust Models Are More Interpretable Because Attributions Look Normal
Abstract
Recent work has found that adversarially-robust deep networks used for image classification are more interpretable: their feature attributions tend to be sharper, and are more concentrated on the objects associated with the image’s groundtruth class. We show that smooth decision boundaries play an important role in this enhanced interpretability, as the model’s input gradients around data points will more closely align with boundaries’ normal vectors when they are smooth. Thus, because robust models have smoother boundaries, the results of gradientbased attribution methods, like Integrated Gradients and DeepLift, will capture more accurate information about nearby decision boundaries. This understanding of robust interpretability leads to our second contribution: boundary attributions, which aggregate information about the normal vectors of local decision boundaries to explain a classification outcome. We show that by leveraging the key factors underpinning robust interpretability, boundary attributions produce sharper, more concentrated visual explanations—even on non-robust models.
1 INTRODUCTION
Feature attribution methods are widely used to explain the predictions of neural networks (Binder et al., 2016; Dhamdhere et al., 2019; Fong & Vedaldi, 2017; Leino et al., 2018; Montavon et al., 2015; Selvaraju et al., 2017; Shrikumar et al., 2017; Simonyan et al., 2013; Smilkov et al., 2017; Springenberg et al., 2014; Sundararajan et al., 2017). By assigning an importance score to each input feature of the model, these techniques help to focus attention on parts of the data most responsible for the model’s observed behavior. Recent work (Croce et al., 2019; Etmann et al., 2019) has observed that feature attributions in adversarially-robust image models, when visualized, tend to be more interpretable—the attributions correspond more clearly to the discriminative portions of the input.
One way to explain the observation relies on the fact that robust models do not make use of nonrobust features (Ilyas et al., 2019) whose statistical meaning can change with small, imperceptible changes in the source data. Thus, by using only robust features to predict, these models naturally tend to line up with visibly-relevant portions of the image. Etmann et al. take a different approach, showing that the gradients of robust models’ outputs more closely align with their inputs, which explains why attributions on image models are more visually interpretable.
In this paper, we build on this geometric understanding of robust interpretability. With both analytical (Sec. 3) and empirical (Sec. 5) results, we show that the gradient of the model with respect to its input, which is the basic building block of all gradient-based attribution methods, tends to be more closely aligned with the normal vector of a nearby decision boundary in robust models than in “normal” models. Leveraging this understanding, we propose Boundary-based Saliency Map (BSM) and Boundary-based Integrated Gradient (BIG), two variants of boundary attributions (Sec. 4), which base attributions on information about nearby decision boundaries (see an illustration in Fig. 1a). While BSM provides theoretical guarantees in the closed-form, BIG generates both quantitatively and qualitatively better explanations. We show that these methods satisfy several desireable formal properties, and that even on non-robust models, the resulting attributions are more focused (Fig. 1b) and less sensitive to the “baseline” parameters required by some attribution methods.
To summarize, our main contributions are as follows. (1) We present an analysis that sheds light on the previously-observed phenomeon of robust interpretability showing that alignment between the normal vectors of decision boundaries and models’ gradients is a key ingredient (Proposition 1).
In particular, we derive a closed-form result for one-layer networks (Theorem 1) and empirically validate the take-away of our theorem generalizes to deeper networks. (2) Motivated by our analysis, we introduce boundary attributions, which leverage the connection between boundary normal vectors and gradients to yield explanations for non-robust models that carry over many of the favorable properties that have been observed of explanations on robust models. (3) We empirically demonstrate that one such type of boundary attribution, called Boundary-based Integrated Gradients (BIG), produces explanations that are more accurate than prior attribution methods (relative to ground-truth bounding box information), while mitigating the problem of baseline sensitivity that is known to impact applications of Integrated Gradients Sundararajan et al. (2017) (Section 6).
2 BACKGROUND
We begin by introducing our notations. Throughout the paper we use italicized symbols x to denote scalar quantities and bold-face x to denote vectors. We consider neural networks with ReLU as activations prior to the top layer, and a softmax activation at the top. The predicted label for a given input x is given by F (x) = argmaxc fc(x),x ∈ Rd, where F (x) is the predicted label and fi(x) is the output on the class i. As the softmax layer does not change the ranking of neurons in the top layer, we will assume that fi(x) denotes the pre-softmax score. Unless otherwise noted, we use ||x|| to denote the `2 norm of x, and the `2 neighborhood centered at x with radius as B(x, ).
Explainability. Feature attribution methods are widely-used to explain the predictions made by DNNs, by assigning importance scores for the network’s output to each input feature. Conventionally, scores with greater magnitude indicate that the corresponding feature was more relevant to the predicted outcome. We denote feature attributions by z = g(x, f), z,x ∈ Rd. When f is clear from the context, we simply write g(x). While there is an extensive and growing literature on attribution methods, our analysis will focus closely on the popular gradient-based methods, Saliency Map (Simonyan et al., 2013), Integrated Gradient (Sundararajan et al., 2017) and Smooth Gradient (Smilkov et al., 2017), shown in Defs 1-3.
Definition 1 (Saliency Map (SM)) The Saliency Map gS(x) is given by gS(x) := ∂f(x)∂x .
Definition 2 (Integrated Gradient (IG)) Given a baseline input xb, the Integrated Gradient gIG(x;xb) is given by gIG(x;xb) := (x− xb) ∫ 1 0 ∂f((x−xb)t+xb) ∂x dt.
Under review as a conference paper at ICLR 2022
#$
#$
%
%
!! !"
"
#$ %
"!
""
"#
Definition 3 (Smooth Gradient (SG)) Given a zero-centered Gaussian distributionN with a standard deviation σ, the Smooth Gradient gSG(x;σ) is given by gSG(x;σ) := E ∼N (0,σ2I) ∂f(α+ )∂x .
Besides, we will also include results from DeepLIFT (Shrikumar et al., 2017) and grad × input (element-wise multiplication between Saliency Map and the input) (Simonyan et al., 2013) in our empirical evaluation. As we show in Section 3.2, Defs 1-3 satisfy axioms that relate to the local linearity of ReLU networks, and in the case of randomized smoothing (Cohen et al., 2019), their robustness to input perturbations. We further discuss these methods relative to others in Sec. 7.
Robustness. Two relevant concepts about adversarial robustness will be used in this paper: prediction robustness that the model’s output label remains unchanged within a particular `p norm ball and attribution robustness that the feature attributions are similar within the same ball. Recent work has identified the model’s Lipschitz continuity as a bridge between these two concepts (Wang et al., 2020c) and some loss functions in achieving prediction robustness also bring attribution robustness (Chalasani et al., 2020). We refer to robustness as prediction robustness if not otherwise noted.
3 EXPLAINABILITY, DECISION BOUNDARIES, AND ROBUSTNESS
In this section, we begin by discussing the role of decision boundaries in constructing explanations of model behavior via feature attributions. We first illustrate the key relationships in the simpler case of linear models, which contain exactly one boundary, and then generalize to piecewise-linear classifiers as they are embodied by deep ReLU networks. We then show how local robustness causes attribution methods to align more closely with nearby decision boundaries, leading to explanations that better reflect these relationships.
3.1 ATTRIBUTIONS FOR LINEAR MODELS
Consider a binary classifier C(x) = sign(w>x + b) that predicts a label {−1, 1} (ignoring “tie” cases where C(x) = 0, which can be broken arbitrarily). In its feature space, C(x) is a hyperplane H that separates the input space into two open half-spaces S1 and S2 (see Fig. 2a). Accordingly, the normal vector n̂ of the decision boundary is the only vector that faithfully explains the model’s classification while other vectors, while they may describe directions that lead to positive changes in the model’s output score, are not faithful in this sense (see v in Fig. 2a for an example). In practice, to assign attributions for predictions made by C, SM, SG, and the integral part of IG (see Sec. 2) return a vector characterized by z = k1n̂ + k2 (Ancona et al., 2018), where k1 6= 0 and k2 ∈ R, regardless of the input x that is being explained. In other words, these methods all measure the importance of features by characterizing the model’s decision boundary, and are equivalent up to the scale and position of n̂.
3.2 GENERALIZING TO PIECEWISE-LINEAR BOUNDARIES
In the case of a piecewise-linear model, such as a ReLU network, the decision boundaries comprise a collection of hyperplane segments that partition the feature space, as in H1, H2 and H3 in the example shown in Figure 2b. Because the boundary no longer has a single well-defined normal, one intuitive way to extend the relationship between boundaries and attributions developed in the previous section is to capture the normal vector of the closest decision boundary to the input being explained. However, as we show in this section, the methods that succeeded in the case of linear models (SM, SG, and the integral part of IG) may in fact fail to return such attributions in the more general case of piecewise-linear models, but local robustness often remedies this problem. We begin by reviewing key elements of the geometry of ReLU networks (Jordan et al., 2019).
ReLU activation polytopes. For a neuron u in a ReLU network f(x), we say that its status is ON if its pre-activation u(x) ≥ 0, otherwise it is OFF. We can associate an activation pattern denoting the status of each neuron for any point x in the feature space, and a half-space Au to the activation constraint u(x) ≥ 0. Thus, for any point x the intersection of the half-spaces corresponding to its activation pattern defines a polytope P (see Fig. 2b), and within P the network is a linear function such that ∀x ∈ P, f(x) = w>Px + bP , where the parameters wp and bP can be computed by differentiation (Fromherz et al., 2021). Each facet of P (dashed lines in Fig. 2b) corresponds to a boundary that “flips” the status of its corresponding neuron. Similar to activation constraints, decision boundaries are piecewise-linear because each decision boundary corresponds to a constraint fi(x) ≥ fj(x) for two classes i, j (Fromherz et al., 2021; Jordan et al., 2019). Gradients might fail. Saliency maps, which we take to be simply the gradient of the model with respect to its input, can thus be seen as a way to project an input onto a decision boundary. That is, a saliency map is a vector that is normal to a nearby decision boundary segment. However, as others have noted, a saliency map is not always normal to any real boundary segment in the model’s geometry (see the left plot of Fig. 2c), because when the closest boundary segment is not within the activation polytope containing x, the saliency map will instead be normal to the linear extension of some other hyperplane segment (Fromherz et al., 2021). In fact, iterative gradient descent typically outperforms the Fast Gradient Sign Method (Goodfellow et al., 2015) as an attack demonstrates that this is often the case.
When gradients succeed. While saliency maps may not be the best approach in general for capturing information about nearby segments of the model’s decision boundary, there are cases in which it serves as a good approximation. Recent work has proposed using the Lipschitz continuity of an attribution method to characterize the difference between the attributions of an input x and its neighbors within a `p ball neighborhood (Def. 4) (Wang et al., 2020c). This naturally leads to Proposition 1, which states that the difference between the saliency map at an input and the correct normal to the closest boundary segment is bounded by the distance to that segment.
Definition 4 (Attribution Robustness) An attribution method g(x) is (λ, δ)-locally robust at the evaluated point x if ∀x′ ∈ B(x, δ), ||g(x′)− g(x)|| ≤ λ||x′ − x||.
Proposition 1 Suppose that f has a (λ, δ)-robust saliency map gS at x, x′ is the closest point on the closest decision boundary segment to x and ||x′ − x|| ≤ δ, and that n is the normal vector of that boundary segment. Then ||n− gS(x)|| ≤ λ||x− x′||.
Proposition 1 therefore provides the following insight: for networks that admit robust attributions (Chen et al., 2019; Wang et al., 2020c), the saliency map is a good approximation to the boundary vector. As prior work has demonstrated the close correspondence between robust prediction and robust attributions (Wang et al., 2020c; Chalasani et al., 2020), this in turn suggests that explanations on robust models will more closely resemble boundary normals.
As training robust models can be expensive, and may not come with guarantees of robustness, post-processing techniques like randomized smoothing (Cohen et al., 2019), have been proposed as an alternative. Dombrowski et al. (2019) noted that models with softplus activations (y = 1/β log(1+exp (βx))) approximate smoothing, and in fact give an exact correspondence for singlelayer networks. Combining these insights, we arrive at Theorem 1, which suggests that the saliency map on a smoothed model approximates the closest boundary normal vector well; the similarity is inversely proportional to the standard deviation of the noise used to smooth the model.
Theorem 1 Let m(x) = ReLU(Wx) be a one-layer network and when using randomized smoothing, we writemσ(x). Let g(x) be the SM formσ(x) and suppose ∀x′′ ∈ B(x, ||x−x′||), ||g(x′′)|| ≥ c where x′ is the closest adversarial example, we have the following statement holds: ||g(x) − g(x′)|| ≤ λ where λ is monotonically decreasing w.r.t σ.
Theorem 1 suggests that when randomized smoothing is used, the normal vector of the closest decision boundary segment and the saliency map are similar, and this similarity increases with the smoothness of the model’s boundaries. We think the analytical form for deeper networks exists but its expression might be unnecessarily complex due that we need to recursively apply ReLU before computing the integral (i.e., the expectation). The analytical result above for one-layer network and empirical validations for deeper nets in Figure 11, if taken together, shows that attributions and boundary-based attributions are more similar in a smoothed model.
4 BOUNDARY-BASED ATTRIBUTION
Without the properties introduced by robust learning or randomized smoothing, the local gradient, i.e. saliency map, may not be a good approximation of decision boundaries. In this section, we build on the insights of our analysis to present a set of novel attribution methods that explicitly incorporate the normal vectors of nearby boundary segments. Importantly, these attribution methods can be applied to models that are not necessarily robust, to derive explanations that capture many of the beneficial properties of explanations for robust models.
Using the normal vector of the closest decision boundary to explain a classifier naturally leads to Definition 5, which defines attributions directly from the normal of the closest decision boundary.
Definition 5 (Boundary-based Saliency Map (BSM)) Given f and an input x, we define Boundary-based Saliency MapBS(x) as follows: BS(x) def = ∂fc(x
′)/∂x′, where x′ is the closest adversarial example to x, i.e. c = F (x) 6= F (x′) and ∀xm.||xm−x|| < ||x′−x|| → F (x) = F (xm).
Incorporating More Boundaries. The main limitation of using Definition 5 as a local explanation is obvious: the closest decision boundary only captures one segment of the entire decision surface. Even in a small network, there will be numerous boundary segments in the vicinity of a relevant point. Taking inspiration from Integrated Gradients, Definition 6 proposes the Boundary-based Integrated Gradient (BIG) by aggregating the attributions along a line between the input and its closest boundary segment.
Definition 6 (Boundary-based Integrated Gradient(BIG)) Given f , Integrated Gradient gIG and an input x, we define Boundary-based Integrated Gradient BS(x) as follows: BIG(x) := gIG(x;x′), where x is the nearest adversarial example to x, i.e. c = F (x) 6= F (x′) and ∀xm.||xm − x|| < ||x′ − x|| → F (x) = F (xm).
Geometric View of BIG. BIG explores a linear path from the boundary point to the target point. Because points on this path are likely to traverse different activation polytopes, the gradient of intermediate points used to compute gIG are normals of linear extensions of their local boundaries. As the input gradient is identical within a polytope Pi, the aggregate computed by BIG sums each gradient wi along the path and weights it by the length of the path segment intersecting with Pi. In other words, one may view IG as an exploration of the model’s global geometry that aggregates all boundaries from a fixed reference point, whereas BIG explores the local geometry around x. In the former case, the global exploration may reflect boundaries that are not particularly relevant to model’s observed behavior at a point, whereas the locality of BIG may aggregate boundaries that are more closely related (a visualization is shown in Fig. 1a).
Finding nearby boundaries. Finding the exact closest boundary segment is identical to the problem of certifying local robustness (Fromherz et al., 2021; Jordan et al., 2019; Kolter & Wong, 2018; Lee et al., 2020; Leino et al., 2021b; Tjeng et al., 2019; Weng et al., 2018), which is NP-hard for piecewise-linear models (Sinha et al., 2020). To efficiently find an approximation of the closest boundary segment, we leverage and ensemble techniques for generating adversarial examples, i.e. PGD (Madry et al., 2018), AutoPGD (Croce & Hein, 2020) and CW (Carlini & Wagner, 2017), and use the closest one found given a time budget. The details of our implementation are discussed in Section 5, where we show that this yields good results in practice.
5 EVALUATION
In this section, we first validate that the attribution vectors are more aligned to normal vectors of nearby boundaries in robust models(Fig. 3a). We secondly show that boundary-based attributions provide more “accurate” explanations – attributions highlight features that are actually relevant to the label – both visually (Fig. 4 and 5) and quantitatively (Table 1). Finally, we show that in a standard model, whenever attributions more align with the boundary attributions, they are more “accurate”.
General Setup. We conduct experiments over two data distributions, ImageNet (Russakovsky et al., 2015) and CIFAR-10 (Krizhevsky et al.). For ImageNet, we choose 1500 correctly-classified images from ImageNette (Howard), a subset of ImageNet, with bounding box area less than 80% of the original source image. For CIFAR-10, We use 5000 correctly-classified images. All standard and robust deep classifiers are ResNet50. All weights are pretrained and publicly available (Engstrom et al., 2019). Implementation details of the boundary search (by ensembling the results of PGD, CW and AutoPGD) and the hyperparameters used in our experiments, are included in Appendix B.2.
5.1 ROBUSTNESS→ BOUNDARY ALIGNMENT
In this subsection, we show that SM and IG better align with the normal vectors of the decision boundaries in robust models. For SM, we use BSM as the normal vectors of the nearest decision boundaries and measure the alignment by the `2 distance between SM and BSM following Proposition 1. For IG, we use BIG as the aggregated normal vectors of all nearby boundaries because
IG also incorporates more boundary vectors. Recently, Pan et al. (2021) also provides Adversarial Gradient Integral (AGI) as an alternative way of incorporating the boundary normal vectors into IG. We first use both BIG and AGI to measure how well IG aligns with boundary normals and later compare them in Sec. 5.2, followed by a formal discussion in Sec. 7.
Aggregated results for standard models and robust models are shown in Fig. 3a. It shows that adversarial training with bigger encourages a smaller difference between attributions and their boundary variants. Particularly, using `2 norm and setting = 3.0 are most effective for ImageNet compared to `∞ norm bound. One possible explanation is that the `2 space is special because training with `∞ bound may encourage the gradient to be more Lipschitz in `1 because of the duality between the Lipschitzness and the gradient norm, whereas `2 is its own dual.
5.2 BOUNDARY ATTRIBUTION→ BETTER LOCALIZATION
In this subsection, we show boundary attributions (BSM, BIG and AGI) better localize relevant features. Besides SM, IG and SG, we also focus on other baseline methods including Grad × Input (GTI) (Simonyan et al., 2013) and DeepLIFT (rescale rule only) (Shrikumar et al., 2017) that are reported to be more faithful than other related methods (Adebayo et al., 2018; 2020).
In an image classification task where ground-truth bounding boxes are given, we consider features within a bounding box as more relevant to the label assigned to the image. Our evaluation is performed over ImageNet only because no bounding box is provided for CIFAR-10 data. The metrics used for our evaluation are: 1) Localization (Loc.) (Chattopadhyay et al., 2017) evaluates the intersection of areas with the bounding box and pixels with positive attributions; 2) Energy Game (EG) (Wang et al., 2020a) instead computes the portion of attribute scores within the bounding box. While these two metrics are common in the literature, we propose the following additional metrics: 3)Positive Percentage (PP) evaluates the portion of positive attributions in the bounding box because a naive assumption is all features within bounding boxes are relevant to the label (we will revisit this assumption in Sec. 6); and 4) Concentration (Con.) sums the absolute value of attribution scores over the distance between the “mass” center of attributions and each pixel within the bounding box. Higher Loc., EG, PP and Con. are better results. We provide formal details for the above metrics in Appendix B.1.
We show the average scores for ResNet50 models in Table 1 where the corresponding boxplots can be found in Appendix B.4. BIG is noticeably better than other methods on Loc. EG, PP and Con. scores for both robust and standard models and matches the performance of SG on EG for a standard model. Notice that BSM is not significantly better than others in a standard model, which confirms our motivation of BIG – that we need to incorporate more nearby boundaries because a single boundary may not be sufficient to capture the relevant features.
We also measure the correlation between the alignment of SM and BSM with boundary normals and the localization abilities, respectively. For SM, we use BSM to represent the normal vectors of the boundary. For IG, we use AGI and BIG. For each pair X-Y in {SM-BSM, IG-AGI, IG-BIG}, we measure the empirical correlation coefficient between −||X− Y ||2 and the localization scores of X in a standard ResNet50 and the result is shown in Fig. 3b. Our results suggest that when the attribution methods better align with their boundary variants, they can better localize the relevant features in terms of the Loc. and EG. However, PP and Con. have weak and even negative correlations. One possible explanation is that the high PP and Con. of BIG and AGI compared to IG (as shown in Table 1) may also come from the choice of the reference points. Namely, compared to a zero vector, a reference point on the decision boundary may better filter out noisy features.
We end our evaluations by visually comparing the proposed method, BIG, against all other attribution methods for the standard ResNet50 in Fig. 4 and for the robust ResNet50 in Fig. 5, which demonstrates that BIG can easily and efficiently localize features that are relevant to the prediction. More visualizaitons can be found in the Appendix E.
Summary. Taken together, we close the loop and empirical show that standard attributions in robust models are visually more interpretable because they better capture the nearby decision boundaries. Therefore, the final take-away from our analytical and empirical results is if more resources are devoted to training robust models, effectively identical explanations can be obtained using much less costly standard gradient-based methods, i.e. IG.
6 DISCUSSION
Baseline Sensitivity. It is natural to treat that BIG frees users from the baseline selection in explaining non-linear classifiers. Empirical evidence has shown that IG is sensitive to the baseline inputs (Sturmfels et al., 2020). We compare BIG with IG when using different baseline inputs, white or black images. We show an example in Fig 6b. For the first two images, when using the baseline input as the opposite color of the dog, more pixels on dogs receive non-zero attribution scores. Whereas backgrounds always receive more attribution scores when the baseline input has the same color as the dog. This is because gIG(x)i ∝ (x− xb)i (see Def. 2) that greater differences in the input feature and the baseline feature can lead to high attribution scores. The third example further questions the readers using different baselines in IG whether the network is using the white dog to predict Labrador retriever. We demonstrate that conflicts in IG caused by the sensitivity to the baseline selection can be resolved by BIG. BIG shows that black dog in the last row is more important for predicting Labrador retriever and this conclusion is further validated by our counterfactual experiment in Appendix D. Overall, the above discussion highlights that BIG is significantly better than IG in reducing the non-necessary sensitivity in the baseline selection.
Limitations. We identify two limitations of the work. 1) Bounding boxes are not perfect groundtruth knowledge for attributions. In fact, we find a lot of examples where the bounding boxes either fail to capture all relevant objects or are too big to capture relevant features only. Fixing mislabeled bounding boxes still remain an open question and should benefit more expandability research in general. 2) Our analysis only targets on attributions that are based on end-to-end gradient computations. That is, we are not able to directly characterize the behavior of perturbation-based approaches, i.e. Mask (Fong & Vedaldi, 2017), and activation-based approaches, i.e. GradCAM (Selvaraju et al., 2017) and Feature Visualization (Olah et al., 2017).
7 RELATED WORK
Ilyas et al. (2019) shows an alternative way of explaining why robust models are more interpretable by showing robust models usually learn robust and relevant features, whereas our work serves as a geometrical explanation to the same empirical findings in using attributions to explain deep models. Our analysis suggests we need to capture decision boundaries in order to better explain classifiers,
whereas a similar line of work, AGI (Pan et al., 2021) that also involves computations of adversarial examples is motivated to find a non-linear path that is linear in the representation space instead of the input space compared to IG. Therefore, AGI uses PGD to find the adversarial example and aggregates gradients on the non-linear path generated by the PGD search. We notice that the trajectory of PGD search is usually extremely non-linear, complex and does not guarantee to return closer adversarial examples without CW or AutoPGD (see comparisons between boundary search approaches in Table B.2). We understand that finding the exact closest decision boundary is not feasible, but our empirical results suggest that the linear path (BIG) returns visually sharp and quantitative better results in localizing relevant features. Besides, a non-linear path should cause AGI fail to meet the symmetry axiom (Sundararajan et al., 2017) (see Appendix C for an example of the importance of symmetry for attributions). We further summarize the commons and differences in Table 6a.
In the evaluation of the proposed methods, we choose metrics related to bounding boxes over other metrics because, for classification, we are interested in whether the network associates relevant features with the label, while other metrics (Adebayo et al., 2018; Ancona et al., 2017; Samek et al., 2016; Wang et al., 2020b; Yeh et al., 2019), e.g. infidelity (Yeh et al., 2019), mainly evaluate whether output scores are faithfully attributed to each feature. Our idea of incorporating boundaries into explanations may generalize to other score attribution methods, e.g. Distributional Influence (Leino et al., 2018) and DeepLIFT (Shrikumar et al., 2017). The idea of using boundaries in explanations has also been explored by T-CAV (Kim et al., 2018), where a linear decision boundary is learned for the internal activations and associated with their proposed notion of concept.
When viewing our work as using nearby boundaries to explore the local geometry of the model's output surface, a related line of work is NeighborhoodSHAP (Ghalebikesabi et al., 2021), a local version of SHAP (Lundberg & Lee, 2017). When viewing our work as a different use of adversarial examples, related work focuses on counterfactual examples (semantically meaningful adversarial examples) on the data manifold (Chang et al., 2019; Dhurandhar et al., 2018; Goyal et al., 2019).
8 CONCLUSION
In summary, we rethink the target question an explanation should answer for a classification task: what are the important features that the classifier uses to place the input on a specific side of the decision boundary? We find the answer to this question relates to the normal vectors of decision boundaries in the neighborhood of the input, and propose BSM and BIG as boundary attribution approaches. Empirical evaluations on state-of-the-art classifiers validate that our approaches provide more concentrated, sharper and more accurate explanations than existing approaches. Our idea of leveraging boundaries to explain classifiers connects explanations with adversarial robustness and helps encourage the community to improve model quality for explanation quality.
A THEOREMS AND PROOFS
A.1 PROOF OF PROPOSITION 1
Proposition 1 Suppose that f has a (λ, δ)-robust saliency map gS at x, that x′ is the closest point on the closest decision boundary segment to x with ||x′ − x|| ≤ δ, and that n is the normal vector of that boundary segment. Then ||n − gS(x)|| ≤ λ||x − x′||.

Proof: The normal vector n can be computed efficiently by taking the derivative of the model's output w.r.t. the boundary point x′, i.e. n = ∂f(x′)/∂x′, where x′ satisfies F(xm) = F(x) for all xm ∈ R^d with ||xm − x|| ≤ ||x′ − x||. Because we assume ||x − x′|| ≤ δ and the model has a (λ, δ)-robust saliency map, Def. 4 gives

||n − gS(x)|| ≤ λ||x − x′||
A.2 PROOF OF THEOREM 1
Theorem 1 Let m(x) = ReLU(Wx) be a one-layer network, and write mσ(x) for its randomized-smoothing counterpart. Let g(x) be the SM for mσ(x) and suppose ∀x′′ ∈ B(x, ||x − x′||), ||g(x′′)|| ≥ c, where x′ is the closest adversarial example. Then ||g(x) − g(x′)|| ≤ λ, where λ is monotonically decreasing w.r.t. σ.
Proof:
We begin the proof by first introducing randomized smoothing.
Definition 7 (Randomized Smoothing (Cohen et al., 2019)) Suppose F(x) = argmax_c f_c(x); the smoothed classifier G(x) is defined as

G(x) := argmax_c Pr[F(x + ε) = c]    (1)

where ε ∼ N(0, σ²I).
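In practice, G(x) is approximated by Monte-Carlo sampling. A minimal sketch follows (the model handle, σ, sample count and class count are illustrative choices, not values prescribed above):

import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=100, n_classes=10):
    # Monte-Carlo estimate of the smoothed classifier G(x) in Def. 7.
    votes = torch.zeros(n_classes)
    with torch.no_grad():
        for _ in range(n_samples):
            eps = torch.randn_like(x) * sigma          # ε ~ N(0, σ²I)
            votes[model(x + eps).argmax(dim=-1)] += 1  # vote for F(x + ε)
    return votes.argmax().item()                       # argmax_c Pr[F(x + ε) = c]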
The rest of the proof is three-fold: 1) first, we show that there exists a non-linear activation function Er(x) such that the output of the smoothed ReLU network mσ(x) is unchanged when the ReLU activation is replaced with the Er activation; 2) second, we derive the saliency map of the network with the Er activation; and 3) last, we show that the difference between the SM and BSM of the network with the Er activation is bounded by a quantity that is inversely related to the standard deviation used to create the smoothed ReLU network mσ(x).
Step I: the error activation (Er) function and randomized smoothing.¹
Randomized smoothing creates a smoothed model that returns whichever label the base classifier most likely returns under perturbations drawn from Gaussian noise. Now we take a look at the output of each class under the Gaussian noise. Consider y_i, the output of the i-th class of the network ReLU(Wx), that is

y_i = E_{ε∼N(0,σ²I)} [ReLU(w_i^T(x + ε))]    (2)

To simplify the notation, we denote E_{ε∼N(0,σ²I)} as E and expand Equation (2):

y_i = E[ReLU(w_i^T x + w_i^T ε)] = E[ReLU(u + ε′)]    (3)

where u = w_i^T x and ε′ = w_i^T ε. Here u is a scalar and ε′ follows a zero-centered univariate Gaussian with standard deviation s ∝ σ, because the dot product between the constant weight vector w_i and the random vector ε is a linear combination of the dimensions of ε, and the covariance between distinct dimensions of ε is 0 for the Gaussian noise used for randomized smoothing in the literature (Cohen et al., 2019).
¹We appreciate the discussion with the author Pan Kessel of Dombrowski et al. (2019) for the derivation from Equation (5) to (6).
By expanding the expectation into its integral form, we obtain:

y_i = 1/(s√(2π)) ∫_{−∞}^{∞} exp(−ε′²/(2s²)) ReLU(u + ε′) dε′    (4)

Let τ = u + ε′ and notice that ReLU(τ) = 0 if τ < 0; the equation above can be rewritten as:

y_i = 1/(s√(2π)) ∫_0^{∞} exp(−(τ − u)²/(2s²)) τ dτ    (5)
    = s/√(2π) · exp(−u²/(2s²)) + (u/2)[1 + Erf(u/(√2·s))]    (6)
where Erf is the error function, defined as Erf(x) = (2/√π) ∫_0^x exp(−t²) dt. We therefore define an Er activation for an input u with standard deviation s as

Er(u; s) = s/√(2π) · exp(−u²/(2s²)) + (u/2)[1 + Erf(u/(√2·s))]    (8)

and we have shown that

y_i = E_{ε∼N(0,σ²I)}[ReLU(w_i^T(x + ε))] = Er(w_i^T x; s(σ))    (9)
That is, to analyze the gradient of the output of a smoothed model w.r.t. the input, we can instead analyze the gradient of the equivalent Er network. We plot three examples of the Er activation in Fig. 7 so the reader can see what the function looks like.
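As a quick sanity check of Equation (9), one can compare the closed-form Er activation against a Monte-Carlo estimate of E[ReLU(u + ε′)]. A minimal sketch follows (the sample count and the test values of u and s are arbitrary):

import numpy as np
from scipy.special import erf

def Er(u, s):
    # Closed-form smoothed ReLU from Equation (8).
    return s / np.sqrt(2 * np.pi) * np.exp(-u**2 / (2 * s**2)) \
        + u / 2 * (1 + erf(u / (np.sqrt(2) * s)))

rng = np.random.default_rng(0)
for u, s in [(-1.0, 0.5), (0.0, 1.0), (2.0, 0.5)]:
    eps = rng.normal(0.0, s, size=1_000_000)    # ε′ ~ N(0, s²)
    mc = np.maximum(u + eps, 0.0).mean()        # E[ReLU(u + ε′)]
    print(f"u={u:+.1f}, s={s}: closed-form={Er(u, s):.4f}, monte-carlo={mc:.4f}")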
Step II: the saliency map of an Er network.
By the definition of the saliency map (Def. 1) and the chain rule, letting u = w_i^T x, we have:

SM(x) = ∂y_i/∂x = (∂y_i/∂u)(∂u/∂x)    (10)
      = ∂Er(u; s)/∂u · w_i    (11)
      = (1/2)[1 + Erf(u/(√2·s))] · w_i    (12)

The transition from Equation (11) to (12) uses the fact that the derivative of Erf(x) is (2/√π) exp(−x²).
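Equation (12) can likewise be verified numerically against a finite-difference derivative of Er; a small sketch with arbitrary test values:

import numpy as np
from scipy.special import erf

def Er(u, s):
    return s / np.sqrt(2 * np.pi) * np.exp(-u**2 / (2 * s**2)) \
        + u / 2 * (1 + erf(u / (np.sqrt(2) * s)))

u, s, h = 0.7, 0.5, 1e-6
finite_diff = (Er(u + h, s) - Er(u - h, s)) / (2 * h)  # numerical dEr/du
closed_form = 0.5 * (1 + erf(u / (np.sqrt(2) * s)))    # Equation (12)
assert abs(finite_diff - closed_form) < 1e-6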
Step III: the difference between SM and BSM for an Er network.
Let x′ be the closest point on the decision boundary of the smoothed classifier mσ and let ||x − x′|| = r (for a closed-form expression of r, see Cohen et al. (2019)). Based on the definition of BSM, we have

BSM(x) = ∂y_i(x′)/∂x′ = (1/2)[1 + Erf(u′/(√2·s))] · w_i,  where u′ = w_i^T x′    (13)
The difference between SM and BSM is therefore

||BSM(x) − SM(x)|| = ||(1/2)[1 + Erf(u′/(√2·s))] · w_i − (1/2)[1 + Erf(u/(√2·s))] · w_i||    (14)
                   = (1/2) |Erf(u′/(√2·s)) − Erf(u/(√2·s))| · ||w_i||    (15)
                   ≤ (1/2) [|Erf(u′/(√2·s))| + |Erf(u/(√2·s))|] · ||w_i||    (Triangle Inequality)    (16)
We notice that u′ is bounded because u′ = w_i^T x′ ≤ ||w_i|| · ||x′|| ≤ ||w_i|| · (||x|| + r), and similarly u = w_i^T x ≤ ||w_i|| · ||x|| ≤ ||w_i|| · (||x|| + r). Because the Erf function is increasing in its input and s > 0, we arrive at the following inequality:
||BSM(x) − SM(x)|| ≤ λ    (17)

where

λ = Erf(||w_i|| · (||x|| + r) / (√2·s)) · ||w_i||    (18)

We drop the absolute-value symbols because the output of Erf is positive when its input is positive. Now, given that ||w_i||, r and ||x|| are constants for a given input x, the upper bound in Equation (18) is monotonically increasing as s decreases. From Step I we know that s ∝ σ; therefore, there exists an upper bound λ on the difference between the SM and BSM of a smoothed classifier, and λ is monotonically decreasing w.r.t. the standard deviation of the Gaussian noise.
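The bound can be made concrete by evaluating λ from Equation (18) for a few noise levels. In the sketch below, the constants ||w_i||, ||x|| and r are arbitrary stand-ins, and we use the exact relation s = σ||w_i|| (the standard deviation of w_i^T ε for ε ∼ N(0, σ²I)):

import numpy as np
from scipy.special import erf

w_norm, x_norm, r = 1.0, 1.0, 0.5              # illustrative constants
for sigma in [0.1, 0.5, 1.0, 2.0]:
    s = sigma * w_norm                          # std of ε′ = w_i^T ε
    lam = erf(w_norm * (x_norm + r) / (np.sqrt(2) * s)) * w_norm
    print(f"sigma={sigma}: lambda={lam:.4f}")   # λ shrinks as σ grows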
B EXPERIMENT DETAILS AND ADDITIONAL RESULTS
B.1 METRICS WITH BOUNDING BOXES
We use the following additional notation in this section. Let X, Z and U be the set of indices of all pixels, the set of indices of pixels with positive attributions, and the set of indices of pixels inside the bounding box, respectively, for a target attribution map g(x). We denote the cardinality of a set S by |S|.
Localization (Loc.) (Chattopadhyay et al., 2017) evaluates the overlap between the bounding box and the pixels with positive attributions.
Definition 8 (Localization) For a given attribution map g(x), the localization score (Loc.) is defined as
Loc. := |Z ∩ U| / (|U| + |Z ∩ (X \ U)|)    (19)
Energy Game (EG) (Wang et al., 2020a) instead computes the portion of attribution scores within the bounding box.
Definition 9 (Energy Game) For a given attribution map g(x), the energy game EG is defined as
EG := Σ_{i∈Z∩U} g(x)_i / Σ_{i∈X} max(g(x)_i, 0)    (20)
Positive Percentage (PP) evaluates the sum of positive attribution scores over the total (absolute) attribution scores within the bounding box.
Definition 10 (Positive Percentage) Let V be the set of indices of all pixels with negative attribution scores. For a given attribution map g(x), the positive percentage PP is defined as

PP := Σ_{i∈Z∩U} g(x)_i / (Σ_{i∈Z∩U} g(x)_i − Σ_{i∈V∩U} g(x)_i)    (21)
Concentration (Con.) sums the normalized attribution scores within the bounding box, each weighted by the inverse of its distance to the “mass” center of the attributions. Notice that cx and cy can be computed with scipy.ndimage.center_of_mass. This definition rewards attribution maps whose high-magnitude scores lie close to the mass center.
Definition 11 (Concentration) For a given attribution map g(x), the concentration Con. is defined as follows:

Con. := Σ_{i∈U} ĝ(x)_i / √((i_x − c_x)² + (i_y − c_y)²)    (22)

where ĝ is the normalized attribution map such that ĝ_i = g_i / Σ_{i∈U} |g_i|, and i_x, i_y are the coordinates of pixel i, with

c_x = Σ_{i∈U} i_x ĝ(x)_i / Σ_{i∈U} ĝ(x)_i,  c_y = Σ_{i∈U} i_y ĝ(x)_i / Σ_{i∈U} ĝ(x)_i    (23)
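A reference implementation of the four metrics might look like the following sketch (the array shapes and the boolean-mask encoding of Z, U and V are our own illustrative choices):

import numpy as np
from scipy.ndimage import center_of_mass

def bbox_metrics(attr, bbox):
    # attr: (H, W) attribution map; bbox: (H, W) boolean bounding-box mask (U).
    pos, neg = attr > 0, attr < 0                                # Z and V
    loc = (pos & bbox).sum() / (bbox.sum() + (pos & ~bbox).sum())          # Eq. (19)
    eg = attr[pos & bbox].sum() / np.maximum(attr, 0).sum()                # Eq. (20)
    pp = attr[pos & bbox].sum() / (attr[pos & bbox].sum()
                                   - attr[neg & bbox].sum())               # Eq. (21)
    g_hat = attr / np.abs(attr[bbox]).sum()                      # normalized map ĝ
    cy, cx = center_of_mass(np.where(bbox, g_hat, 0.0))          # Eq. (23)
    ys, xs = np.nonzero(bbox)
    dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    con = (g_hat[ys, xs] / np.maximum(dist, 1e-8)).sum()         # Eq. (22)
    return dict(loc=loc, eg=eg, pp=pp, con=con)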
Besides metrics related to bounding boxes, there are other metrics in the literature for evaluating attribution methods (Adebayo et al., 2018; Ancona et al., 2017; Samek et al., 2016; Wang et al., 2020b; Yeh et al., 2019). We focus on metrics that use the provided bounding boxes, as we believe they offer a clear distinction between likely-relevant and irrelevant features.
B.2 IMPLEMENTING BOUNDARY SEARCH
Our boundary search uses a pipeline of PGDs, CW and AutoPGD. The adversarial examples returned by each method are compared and the closest one is kept. If no adversarial example is found, the pipeline returns the point from the last iteration of the first method (PGDs in our case). Hyper-parameters for each attack can be found in Table 2. The implementations of PGDs and CW are based on Foolbox (Rauber et al., 2020; 2017) and the implementation of AutoPGD is based on the authors' public repository² (we only use the apgd-ce and apgd-dlr losses for efficiency reasons). All computations are done on a Titan RTX GPU with 24 GB of memory. Comparisons of the results of the ensemble of these three approaches are shown in Fig. 10a.
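A condensed sketch of such an ensemble is shown below. The attack classes follow Foolbox 3's public API, while the AutoPGD stage and all hyper-parameters are placeholders for the actual pipeline (see Table 2 for the values we use):

import torch
import foolbox as fb

def closest_boundary_point(model, x, label, eps=1.0):
    # Return the closest adversarial example found by an ensemble of attacks.
    fmodel = fb.PyTorchModel(model.eval(), bounds=(0, 1))
    attacks = [fb.attacks.L2ProjectedGradientDescentAttack(),
               fb.attacks.L2CarliniWagnerAttack(steps=100)]
    best, best_dist = None, float("inf")
    for attack in attacks:
        _, advs, success = attack(fmodel, x, label, epsilons=eps)
        if success.any():
            dist = torch.norm(advs - x)
            if dist < best_dist:
                best, best_dist = advs, dist
    return best  # the full pipeline also compares against AutoPGD candidates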
B.3 HYPER-PARAMETERS FOR ATTRIBUTION METHODS
All attributions are implemented with Captum (Kokhlikyan et al., 2020) and visualized with Trulens (Leino et al., 2021a). For BIG and IG, we use 20 intermediate points between the baseline and the input, and the interpolation method is set to riemann_trapezoid. For AGI, we build on the authors' public repository³. The hyper-parameters follow the authors' defaults for ImageNet and we make minimal changes to adapt them to CIFAR-10 (see Fig. 10b).
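Given a boundary point, BIG reduces to IG with the adversarial example as the baseline; a minimal Captum sketch (the model, input and the boundary-search helper are assumed to be defined elsewhere):

import torch
from captum.attr import IntegratedGradients

def big_attribution(model, x, label, x_boundary):
    # BIG = IG integrated from the closest boundary point x_boundary to x (Def. 6).
    ig = IntegratedGradients(model)
    return ig.attribute(x,
                        baselines=x_boundary,       # boundary point, not a black/white image
                        target=label,
                        n_steps=20,                 # 20 intermediate points, as above
                        method="riemann_trapezoid")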
To visualize the attribution maps, we use the HeatmapVisualizer with blur=10, normalization_type="signed_max" and default values for the other keyword arguments from Trulens.
B.4 DETAILED RESULTS ON LOCALIZATION METRICS
We show the average scores for each localization metric in Sec. 5. We also show boxplots of the scores for each localization metric in Fig. 8 for the standard ResNet50 model and Fig. 9 for the robust ResNet50 (ℓ2|3.0). Higher scores are better for all metrics.
2https://github.com/fra31/auto-attack 3https://github.com/pd90506/AGI
B.5 ADDITIONAL COMPARISON WITH AGI
We additionally compare the ability of BIG and AGI to localize relevant features when only PGD is used to find the boundary points; that is, we recursively increase the norm bound and run the PGD attack until we first succeed in finding an adversarial point. We denote this approach as BIGp. Note that BIGp still differs from AGI in the type of path, i.e. line versus curve, over which the integral is performed: AGI aggregates path integrals starting from a set of adversarial points found by targeted PGD attacks, whereas BIGp starts from the adversarial point returned by an untargeted PGD attack. We use the same parameters for both PGD and AGI as in Fig. 2 and run the experiments over the same dataset used in Sec. 5.1. For reference, we also include the results of IG. The results are shown in Table 3. We notice that after removing CW and AutoPGD, BIGp actually performs better than AGI, and even slightly better than BIG for the robust model. One explanation for the small improvement from BIGp is that, for a robust network, the gradient at each iteration of the PGD attack is more informative and less noisy than for a standard model, so the attack better approximates the closest decision boundary. The results in Table 3 therefore demonstrate that BIG and BIGp localize relevant features better than AGI.
B.6 ADDITIONAL LOCALIZATION METRIC
Besides the localization metrics used in Sec. 5.1, we discuss an additional localization metric frequently used for evaluating attention- and CAM-based explanations: Top1-Loc (Choe & Shim, 2019; Aggarwal et al., 2020). Top1-Loc is calculated as follows: an instance is considered Top1-Loc correct given an attribution if 1) the prediction is Top1-correct; and 2) it is GT-Loc correct, namely, the IoU of the ground-truth bounding box and the area highlighted by the attribution is more than 50%. When only using images that are Top1-correct, Top1-Loc reduces to GT-Loc. Top1-Loc differs from the other localization metrics used for evaluating attribution methods because it takes the prediction behavior of the target model into account, which in general is not an axiom when motivating a gradient-based attribution method. As in the previous evaluations, we are only interested in images for which the model makes correct Top1 predictions, so Top1-Loc accuracy reduces to GT-Loc accuracy and we measure GT-Loc directly. To determine which part of the image is highlighted by an attribution, we compute a threshold for each attribution map: a pixel is considered within the highlighted region if and only if its attribution score is higher than the threshold. For a given attribution map, we take the threshold t to be the q-th percentile of the absolute values of the attribution scores. We plot GT-Loc accuracy against q in Fig. 13. We notice that attention-based and CAM-based attributions usually produce a cloud-like visualization because of the blurring techniques or upsampling layers used to compute the results. To ensure GT-Loc also works for the gradient-based attributions studied in this paper, we include additional results (Fig. 14) where we first apply a Gaussian blur (σ = 3.0) to the attribution map before calculating GT-Loc accuracy. The results are aggregated over 1500 images from ImageNette on a standard ResNet50 and a robust ResNet50, respectively. Higher GT-Loc scores are better.
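A sketch of the GT-Loc computation described above (the percentile thresholding and IoU test follow the text; the array encoding is our own):

import numpy as np

def gt_loc_correct(attr, bbox, q=40.0, iou_threshold=0.5):
    # GT-Loc: IoU between the thresholded attribution region and the bounding box.
    t = np.percentile(np.abs(attr), q)   # q-th percentile of |attribution|
    highlight = np.abs(attr) > t         # highlighted region
    iou = (highlight & bbox).sum() / (highlight | bbox).sum()
    return iou > iou_threshold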
Behavior of BIG. The results in Fig. 13 and 14 show that BIG is better than all other attributions on standard models excluding SG, and uniformly better, including SG, on a robust model. Before we explain the behavior of SG (green curves) on standard models in the next paragraph, we also observe that: 1) blurring only changes the GT-Loc scores but not the overall rankings across attributions; 2) a threshold corresponding to a percentile near 40% provides the best GT-Loc scores for all methods; and 3) gradient-based attributions generally produce worse GT-Loc (or Top1-Loc) scores than the CAM-based and attention-based approaches in the literature (Choe & Shim, 2019; Aggarwal et al., 2020), which is not surprising because gradient-based approaches are usually axiomatically justified to be faithful to the model. The model will inevitably learn some spurious features from the input, which makes gradient-based attributions noisier than attention- and CAM-based ones. Therefore, when localizing relevant features, users may want to consult activation-based approaches, i.e. CAMs, but when debugging and ensuring the network learns fewer spurious and irrelevant features, users should instead use gradient-based approaches because of the axioms behind them.
Behavior of SG in Standard Models. SG is uniformly better than all other approaches in terms of GT-Loc accuracy on a standard model, which is surprising but not totally unexpected. We believe the reason behind this result is that SG is actually the gradient of a smoothed counterpart of the standard model (see the discussion near Theorem 1), which is more robust. Therefore, the comparison between SG and the other approaches is not quite apples-to-apples, because SG can be less faithful to the standard model – it is instead more faithful to the smoothed classifier. This is very likely why SG is worse than BIG in Fig. 13b and 14b, where smoothing becomes marginal for improving the robustness of a model that has already been robustly trained.
B.7 SANITY CHECK FOR BIG
We perform sanity checks for BIG using rank-order correlations between the absolute values of BIG attributions while randomizing the weights from the top layer to the bottom (Adebayo et al., 2018). To prevent the model's output from becoming NaN when randomizing the weights of each trainable layer, we replace each weight matrix with a random matrix of the same norm, as follows.
import torch

def _randomized_models():
    # Collect all trainable parameters so we can randomize them from the top down.
    all_parameters = list(model.parameters())
    for step, param in enumerate(all_parameters[::-1]):
        random_w = torch.randn_like(param)
        # Rescale the random weights to the norm of the original weights to
        # prevent the network from producing NaN outputs.
        param.data = random_w * torch.norm(param.data) / torch.norm(random_w)
        # Yield the model after every num_blocks randomized layers (and once all
        # layers have been randomized).
        if step % num_blocks == 0 or step == len(all_parameters) - 1:
            yield model
In each iteration, we randomize 5 more layers, proceeding through the reversed sequence returned by model.parameters(); the results are plotted in Fig. 15. We consider that BIG passes the sanity check, as the results are similar to the top row of Fig. 4 in Adebayo et al. (2018).
B.8 ADDITIONAL EXPERIMENT WITH SMOOTHED GRADIENT
Theorem 1 demonstrates that for a one-layer network, as we increase the standard deviation σ of the Gaussian distribution used to create the smoothed model mσ (Cohen et al., 2019), the difference between the saliency map and the boundary-based saliency map computed on mσ is bounded by a constant λ that is monotonically decreasing w.r.t. σ. That is, a greater σ produces a more smoothed model, for which the saliency map (SM) explanation of mσ is a good approximation of the boundary-based saliency map (BSM). However, as the depth of the network increases, a closed-form analysis becomes difficult to derive. Therefore, in this section, we aim to empirically validate that the take-away from Theorem 1 generalizes to deeper networks.
Computing SM for mσ. One practical issue in computing gradient-related explanations for the smoothed model mσ is that mσ is defined in an integral form, which cannot be built directly with tf.keras. However, Theorem 2 shows that the smoothed gradient of the original model m is equivalent to the saliency map of the smoothed model mσ; namely, the order of smoothing and integration is exchangeable when computing the gradient.
Theorem 2 (Proposition 1 from Wang et al. (2020c)) Suppose a model f(x) satisfies max |f(x)| < ∞. For the Smoothed Gradient gSG(x), we have

gSG(x) = ∂(f ∗ q)(x)/∂x    (24)

where q(x) is the density of N(0, σ²I) and ∗ denotes the convolution operation.
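Operationally, Theorem 2 says the SM of mσ can be computed as the expected input gradient of m under Gaussian noise; a minimal PyTorch sketch (the sample count is an illustrative choice, and a batch size of 1 is assumed):

import torch

def smooth_grad(model, x, label, sigma=0.25, n_samples=50):
    # Smoothed Gradient: average input gradient under Gaussian noise (Def. 3).
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        model(noisy)[0, label].backward()   # pre-softmax score of the target class
        grads += noisy.grad
    return grads / n_samples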
Computing BSM for mσ. Another practical issue is that the decision boundary of a smoothed model mσ is not defined deterministically, since randomized smoothing provides a probabilistic guarantee. In this paper, we take the following steps to approximate the decision boundary of a smoothed model. To generate adversarial examples for the smoothed ResNet50 classifier, we need to back-propagate through the noise. The noise sampler is usually not accessible to an attacker who wants to fool a model protected by randomized smoothing; however, our goal in this section is not to reproduce a practical attack, but to find points on the boundary. We therefore sample the noise prior to running the PGD attack and use the same noise across all instances. The steps are listed as follows (a minimal sketch follows the list):
1. We use numpy.random.randn as the sampler for Gaussian noise, with the random seed set to 2020. We use 50 random noise samples per instance.
2. In the PGD attack, we aggregate the gradients of all 50 randomized inputs before taking a regular step to update the input.
3. We set ε = 3.0 and run at most 40 iterations with a step size of 2ε/40.
4. The early-stopping criterion for the PGD loop is that fewer than 10% of the randomized points retain the original prediction.
5. When computing Smooth Gradient for the original points or for the adversarial points, we use the same random noise that we generated to approximate the smoothed classifier.
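A condensed sketch of the fixed-noise PGD loop described in steps 1–5 (tensor shapes, the loss and the projection are simplified placeholders):

import torch
import torch.nn.functional as F

def pgd_on_smoothed(model, x, label, noises, eps=3.0, steps=40):
    # Untargeted L2 PGD against a smoothed classifier with pre-sampled noise.
    step_size = 2 * eps / steps
    adv = x.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Step 2: aggregate gradients over the same fixed noise samples.
        loss = sum(F.cross_entropy(model(adv + n), label) for n in noises)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + step_size * grad / grad.norm()   # ascend the loss
            delta = adv - x
            if delta.norm() > eps:                       # project onto the L2 ball
                adv = x + eps * delta / delta.norm()
            # Step 4: stop once < 10% of noisy copies keep the original label.
            keep = torch.stack([(model(adv + n).argmax(1) == label).float().mean()
                                for n in noises]).mean()
        if keep < 0.10:
            break
    return adv.detach()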
Results. We run the experiment with 500 images from ImageNet on ResNet50, as this computation is significantly more expensive than the previous experiments. We compute the ℓ2 distances between the SM and BSM obtained by the steps above for several values of σ, as shown in Fig. 11. Notably, the trend of the log difference against the standard deviation σ of the Gaussian noise validates that the qualitative meaning of Theorem 1 holds even for large networks. That is, as the model becomes smoother, the saliency map explanation becomes a good approximation of the boundary-based saliency map.
C SYMMETRY OF ATTRIBUTION METHODS
Sundararajan et al. (2017) prove that the linear path is the only path integral that satisfies symmetry; that is, when two features' orders are swapped for a network that does not use any order information from the input, their attribution scores should not change. One simple way to show the importance of symmetry is by the following example; we refer readers to Sundararajan et al. (2017) for more analysis.
Example 1 Consider a function f(x, y) = min(x, y), and suppose we attribute the output of f to the inputs at x = 1, y = 1 with a baseline x = 0, y = 0. An example non-linear path from the baseline to the input is (x = 0, y = 0) → (x = 1, y = 0) → (x = 1, y = 1). On this path, f(x, y) = min(x, y) = y after the point (x = 1, y = 0); therefore, the gradient integral returns 0 for the attribution score of x and 1 for y (we ignore the infinitesimal contribution of the segment near (x = 0, y = 0)). Similarly, when choosing the path (x = 0, y = 0) → (x = 0, y = 1) → (x = 1, y = 1), we find x is more important. Only the linear path treats the two variables symmetrically in this case, assigning them identical attributions (0.5 each).
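The example can be reproduced numerically by integrating gradients along each path. A small sketch follows, using a smooth surrogate softmin(x, y) = −log(e^(−kx) + e^(−ky))/k in place of min so the gradient is well-defined everywhere (the steepness k and the step count are arbitrary):

import numpy as np

K = 50.0  # steepness of the softmin surrogate

def grad_softmin(p):
    # Gradient of softmin(x, y); the weights sum to 1 and concentrate on the smaller input.
    w = np.exp(-K * np.asarray(p))
    return w / w.sum()

def path_attribution(path):
    # Midpoint-rule integral of the gradient along a discretized path.
    attr = np.zeros(2)
    for p0, p1 in zip(path[:-1], path[1:]):
        p0, p1 = np.asarray(p0), np.asarray(p1)
        attr += grad_softmin((p0 + p1) / 2) * (p1 - p0)
    return attr

t = np.linspace(0.0, 1.0, 2000)
via_x = [(u, 0.0) for u in t] + [(1.0, u) for u in t]  # (0,0) -> (1,0) -> (1,1)
via_y = [(0.0, u) for u in t] + [(u, 1.0) for u in t]  # (0,0) -> (0,1) -> (1,1)
linear = [(u, u) for u in t]                           # straight-line path
print(path_attribution(via_x))   # ~ [0, 1]: y looks important
print(path_attribution(via_y))   # ~ [1, 0]: x looks important
print(path_attribution(linear))  # ~ [0.5, 0.5]: symmetric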
D COUNTERFACTUAL ANALYSIS IN THE BASELINE SELECTION
The discussion in Sec. 6 shows an example with two dogs in the image. IG with a black baseline suggests that the body of the white dog is also useful to the model for predicting its label, while the black dog is mixed: part of the black dog receives positive attributions and the rest contributes negatively to the prediction. Our proposed method BIG, in contrast, clearly shows that the most important part is the black dog, followed by the white dog. To validate whether the model is actually using the white dog, we manually remove the black dog or the white dog from the image and check whether the model retains its prediction. The result is shown in Fig. 12. Clearly, removing the black dog changes the model's prediction from Labrador retriever to English foxhound, while removing the white dog does not change the prediction. This result supports that BIG is more reliable than IG with a black baseline in this case, as a more faithful explanation of the classification result for this instance.
E ADDITIONAL VISUALIZATIONS FOR BIG
More visualizations comparing BIG with other attributions can be found in Fig. 16 and 17. We show several examples in Fig. 18 where there is more than one object in the input and we explain the model's Top1 prediction; these show that BIG is able to localize the objects that are actually relevant to the predicted label. | 1. What are the main contributions of the paper regarding interpretable attributions for adversarially robust models?
2. How does the paper explain the reasoning behind interpretable attributions for robust models?
3. What are the new attribution methods devised by the paper, and how do they compare to existing methods like AGI?
4. What are the strengths and weaknesses of the paper's experimental analysis?
5. How does the paper's approach differ from or build upon prior works in attribution methods, such as [1], [2], and [3]? | Summary Of The Paper
Review | Summary Of The Paper
The paper has two main contributions.
a) First, it shows that one reason the attributions of adversarially robust models are more interpretable is that, for these models, the gradient with respect to the input is more closely aligned with the normal direction of a close decision boundary. They verify this claim empirically by showing that the l2-distance between attributions and their boundary variants (attributions computed at a close point on the decision boundary) is lower for robust models than for standard models.
b) Using the previous fact, they devise two new attribution methods, BSM and BIG, which can be used to get more interpretability/explanation even from a normal (non-robust) model. They again verify this claim empirically through various quantitative metrics aimed at measuring the relation between positive attributions and a localized bounding box of an object in the image.
Review
Strengths -
The motivation of the idea is well explained in the paper.
The mathematical foundation required for understanding is also well explained.
I like the effort put in the paper in understanding the reasoning behind interpretable attributions for robust models and then using the info to devise new attribution methods.
For both claims, the paper does extensive qualitative and quantitative experiments.
Weakness -
The new attributions devised in the paper seem very similar to the AGI attribution approach (mentioned in the paper). In BIG, the attributions are computed along interpolations of x and its closest adversarial image, whereas in AGI the attributions are computed along each step of the adversarial image generation.
In Table 1 of the paper, the improvements on the two metrics used in other papers are not really significant; the improvement only appears on the two new metrics proposed in this paper. I would like to see a comparison against some other metrics used in the related works, such as top-1 localization accuracy as used in [1] and [2].
For a fairer comparison with the AGI method, can the authors use only the PGD attack for the adversarial image generation? Or, the authors can also incorporate other adversarial images (and not just PGD) in AGI. For instance, the AGI method can be used to compute the attributions along each step of PGD, CW, and AutoPGD attacks and the final attribution is just the mean attribution of all three approaches.
[3] showed that their attribution technique works well even with multiple objects in the image. Can the authors show some qualitative results comparing multiple-object cases across different attribution methods?
References -
[1] Attention-based Dropout Layer for Weakly Supervised Object Localization. Choe et al. 2019. [2] On The Benefits Of Models With Perceptually Aligned Gradients. Aggarwal et al. 2020. [3] Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks. Wang et al. 2020. |
ICLR | Title
Robust Models Are More Interpretable Because Attributions Look Normal
Abstract
Recent work has found that adversarially-robust deep networks used for image classification are more interpretable: their feature attributions tend to be sharper, and are more concentrated on the objects associated with the image’s groundtruth class. We show that smooth decision boundaries play an important role in this enhanced interpretability, as the model’s input gradients around data points will more closely align with boundaries’ normal vectors when they are smooth. Thus, because robust models have smoother boundaries, the results of gradientbased attribution methods, like Integrated Gradients and DeepLift, will capture more accurate information about nearby decision boundaries. This understanding of robust interpretability leads to our second contribution: boundary attributions, which aggregate information about the normal vectors of local decision boundaries to explain a classification outcome. We show that by leveraging the key factors underpinning robust interpretability, boundary attributions produce sharper, more concentrated visual explanations—even on non-robust models.
1 INTRODUCTION
Feature attribution methods are widely used to explain the predictions of neural networks (Binder et al., 2016; Dhamdhere et al., 2019; Fong & Vedaldi, 2017; Leino et al., 2018; Montavon et al., 2015; Selvaraju et al., 2017; Shrikumar et al., 2017; Simonyan et al., 2013; Smilkov et al., 2017; Springenberg et al., 2014; Sundararajan et al., 2017). By assigning an importance score to each input feature of the model, these techniques help to focus attention on parts of the data most responsible for the model’s observed behavior. Recent work (Croce et al., 2019; Etmann et al., 2019) has observed that feature attributions in adversarially-robust image models, when visualized, tend to be more interpretable—the attributions correspond more clearly to the discriminative portions of the input.
One way to explain the observation relies on the fact that robust models do not make use of nonrobust features (Ilyas et al., 2019) whose statistical meaning can change with small, imperceptible changes in the source data. Thus, by using only robust features to predict, these models naturally tend to line up with visibly-relevant portions of the image. Etmann et al. take a different approach, showing that the gradients of robust models’ outputs more closely align with their inputs, which explains why attributions on image models are more visually interpretable.
In this paper, we build on this geometric understanding of robust interpretability. With both analytical (Sec. 3) and empirical (Sec. 5) results, we show that the gradient of the model with respect to its input, which is the basic building block of all gradient-based attribution methods, tends to be more closely aligned with the normal vector of a nearby decision boundary in robust models than in “normal” models. Leveraging this understanding, we propose Boundary-based Saliency Map (BSM) and Boundary-based Integrated Gradient (BIG), two variants of boundary attributions (Sec. 4), which base attributions on information about nearby decision boundaries (see an illustration in Fig. 1a). While BSM provides theoretical guarantees in the closed-form, BIG generates both quantitatively and qualitatively better explanations. We show that these methods satisfy several desireable formal properties, and that even on non-robust models, the resulting attributions are more focused (Fig. 1b) and less sensitive to the “baseline” parameters required by some attribution methods.
To summarize, our main contributions are as follows. (1) We present an analysis that sheds light on the previously-observed phenomeon of robust interpretability showing that alignment between the normal vectors of decision boundaries and models’ gradients is a key ingredient (Proposition 1).
In particular, we derive a closed-form result for one-layer networks (Theorem 1) and empirically validate the take-away of our theorem generalizes to deeper networks. (2) Motivated by our analysis, we introduce boundary attributions, which leverage the connection between boundary normal vectors and gradients to yield explanations for non-robust models that carry over many of the favorable properties that have been observed of explanations on robust models. (3) We empirically demonstrate that one such type of boundary attribution, called Boundary-based Integrated Gradients (BIG), produces explanations that are more accurate than prior attribution methods (relative to ground-truth bounding box information), while mitigating the problem of baseline sensitivity that is known to impact applications of Integrated Gradients Sundararajan et al. (2017) (Section 6).
2 BACKGROUND
We begin by introducing our notations. Throughout the paper we use italicized symbols x to denote scalar quantities and bold-face x to denote vectors. We consider neural networks with ReLU as activations prior to the top layer, and a softmax activation at the top. The predicted label for a given input x is given by F (x) = argmaxc fc(x),x ∈ Rd, where F (x) is the predicted label and fi(x) is the output on the class i. As the softmax layer does not change the ranking of neurons in the top layer, we will assume that fi(x) denotes the pre-softmax score. Unless otherwise noted, we use ||x|| to denote the `2 norm of x, and the `2 neighborhood centered at x with radius as B(x, ).
Explainability. Feature attribution methods are widely-used to explain the predictions made by DNNs, by assigning importance scores for the network’s output to each input feature. Conventionally, scores with greater magnitude indicate that the corresponding feature was more relevant to the predicted outcome. We denote feature attributions by z = g(x, f), z,x ∈ Rd. When f is clear from the context, we simply write g(x). While there is an extensive and growing literature on attribution methods, our analysis will focus closely on the popular gradient-based methods, Saliency Map (Simonyan et al., 2013), Integrated Gradient (Sundararajan et al., 2017) and Smooth Gradient (Smilkov et al., 2017), shown in Defs 1-3.
Definition 1 (Saliency Map (SM)) The Saliency Map gS(x) is given by gS(x) := ∂f(x)∂x .
Definition 2 (Integrated Gradient (IG)) Given a baseline input xb, the Integrated Gradient gIG(x;xb) is given by gIG(x;xb) := (x− xb) ∫ 1 0 ∂f((x−xb)t+xb) ∂x dt.
Under review as a conference paper at ICLR 2022
#$
#$
%
%
!! !"
"
#$ %
"!
""
"#
Definition 3 (Smooth Gradient (SG)) Given a zero-centered Gaussian distributionN with a standard deviation σ, the Smooth Gradient gSG(x;σ) is given by gSG(x;σ) := E ∼N (0,σ2I) ∂f(α+ )∂x .
Besides, we will also include results from DeepLIFT (Shrikumar et al., 2017) and grad × input (element-wise multiplication between Saliency Map and the input) (Simonyan et al., 2013) in our empirical evaluation. As we show in Section 3.2, Defs 1-3 satisfy axioms that relate to the local linearity of ReLU networks, and in the case of randomized smoothing (Cohen et al., 2019), their robustness to input perturbations. We further discuss these methods relative to others in Sec. 7.
Robustness. Two relevant concepts about adversarial robustness will be used in this paper: prediction robustness that the model’s output label remains unchanged within a particular `p norm ball and attribution robustness that the feature attributions are similar within the same ball. Recent work has identified the model’s Lipschitz continuity as a bridge between these two concepts (Wang et al., 2020c) and some loss functions in achieving prediction robustness also bring attribution robustness (Chalasani et al., 2020). We refer to robustness as prediction robustness if not otherwise noted.
3 EXPLAINABILITY, DECISION BOUNDARIES, AND ROBUSTNESS
In this section, we begin by discussing the role of decision boundaries in constructing explanations of model behavior via feature attributions. We first illustrate the key relationships in the simpler case of linear models, which contain exactly one boundary, and then generalize to piecewise-linear classifiers as they are embodied by deep ReLU networks. We then show how local robustness causes attribution methods to align more closely with nearby decision boundaries, leading to explanations that better reflect these relationships.
3.1 ATTRIBUTIONS FOR LINEAR MODELS
Consider a binary classifier C(x) = sign(w>x + b) that predicts a label {−1, 1} (ignoring “tie” cases where C(x) = 0, which can be broken arbitrarily). In its feature space, C(x) is a hyperplane H that separates the input space into two open half-spaces S1 and S2 (see Fig. 2a). Accordingly, the normal vector n̂ of the decision boundary is the only vector that faithfully explains the model’s classification while other vectors, while they may describe directions that lead to positive changes in the model’s output score, are not faithful in this sense (see v in Fig. 2a for an example). In practice, to assign attributions for predictions made by C, SM, SG, and the integral part of IG (see Sec. 2) return a vector characterized by z = k1n̂ + k2 (Ancona et al., 2018), where k1 6= 0 and k2 ∈ R, regardless of the input x that is being explained. In other words, these methods all measure the importance of features by characterizing the model’s decision boundary, and are equivalent up to the scale and position of n̂.
3.2 GENERALIZING TO PIECEWISE-LINEAR BOUNDARIES
In the case of a piecewise-linear model, such as a ReLU network, the decision boundaries comprise a collection of hyperplane segments that partition the feature space, as in H1, H2 and H3 in the example shown in Figure 2b. Because the boundary no longer has a single well-defined normal, one intuitive way to extend the relationship between boundaries and attributions developed in the previous section is to capture the normal vector of the closest decision boundary to the input being explained. However, as we show in this section, the methods that succeeded in the case of linear models (SM, SG, and the integral part of IG) may in fact fail to return such attributions in the more general case of piecewise-linear models, but local robustness often remedies this problem. We begin by reviewing key elements of the geometry of ReLU networks (Jordan et al., 2019).
ReLU activation polytopes. For a neuron u in a ReLU network f(x), we say that its status is ON if its pre-activation u(x) ≥ 0, otherwise it is OFF. We can associate an activation pattern denoting the status of each neuron for any point x in the feature space, and a half-space Au to the activation constraint u(x) ≥ 0. Thus, for any point x the intersection of the half-spaces corresponding to its activation pattern defines a polytope P (see Fig. 2b), and within P the network is a linear function such that ∀x ∈ P, f(x) = w>Px + bP , where the parameters wp and bP can be computed by differentiation (Fromherz et al., 2021). Each facet of P (dashed lines in Fig. 2b) corresponds to a boundary that “flips” the status of its corresponding neuron. Similar to activation constraints, decision boundaries are piecewise-linear because each decision boundary corresponds to a constraint fi(x) ≥ fj(x) for two classes i, j (Fromherz et al., 2021; Jordan et al., 2019). Gradients might fail. Saliency maps, which we take to be simply the gradient of the model with respect to its input, can thus be seen as a way to project an input onto a decision boundary. That is, a saliency map is a vector that is normal to a nearby decision boundary segment. However, as others have noted, a saliency map is not always normal to any real boundary segment in the model’s geometry (see the left plot of Fig. 2c), because when the closest boundary segment is not within the activation polytope containing x, the saliency map will instead be normal to the linear extension of some other hyperplane segment (Fromherz et al., 2021). In fact, iterative gradient descent typically outperforms the Fast Gradient Sign Method (Goodfellow et al., 2015) as an attack demonstrates that this is often the case.
When gradients succeed. While saliency maps may not be the best approach in general for capturing information about nearby segments of the model’s decision boundary, there are cases in which it serves as a good approximation. Recent work has proposed using the Lipschitz continuity of an attribution method to characterize the difference between the attributions of an input x and its neighbors within a `p ball neighborhood (Def. 4) (Wang et al., 2020c). This naturally leads to Proposition 1, which states that the difference between the saliency map at an input and the correct normal to the closest boundary segment is bounded by the distance to that segment.
Definition 4 (Attribution Robustness) An attribution method g(x) is (λ, δ)-locally robust at the evaluated point x if ∀x′ ∈ B(x, δ), ||g(x′)− g(x)|| ≤ λ||x′ − x||.
Proposition 1 Suppose that f has a (λ, δ)-robust saliency map gS at x, x′ is the closest point on the closest decision boundary segment to x and ||x′ − x|| ≤ δ, and that n is the normal vector of that boundary segment. Then ||n− gS(x)|| ≤ λ||x− x′||.
Proposition 1 therefore provides the following insight: for networks that admit robust attributions (Chen et al., 2019; Wang et al., 2020c), the saliency map is a good approximation to the boundary vector. As prior work has demonstrated the close correspondence between robust prediction and robust attributions (Wang et al., 2020c; Chalasani et al., 2020), this in turn suggests that explanations on robust models will more closely resemble boundary normals.
As training robust models can be expensive, and may not come with guarantees of robustness, post-processing techniques like randomized smoothing (Cohen et al., 2019), have been proposed as an alternative. Dombrowski et al. (2019) noted that models with softplus activations (y = 1/β log(1+exp (βx))) approximate smoothing, and in fact give an exact correspondence for singlelayer networks. Combining these insights, we arrive at Theorem 1, which suggests that the saliency map on a smoothed model approximates the closest boundary normal vector well; the similarity is inversely proportional to the standard deviation of the noise used to smooth the model.
Theorem 1 Let m(x) = ReLU(Wx) be a one-layer network and when using randomized smoothing, we writemσ(x). Let g(x) be the SM formσ(x) and suppose ∀x′′ ∈ B(x, ||x−x′||), ||g(x′′)|| ≥ c where x′ is the closest adversarial example, we have the following statement holds: ||g(x) − g(x′)|| ≤ λ where λ is monotonically decreasing w.r.t σ.
Theorem 1 suggests that when randomized smoothing is used, the normal vector of the closest decision boundary segment and the saliency map are similar, and this similarity increases with the smoothness of the model’s boundaries. We think the analytical form for deeper networks exists but its expression might be unnecessarily complex due that we need to recursively apply ReLU before computing the integral (i.e., the expectation). The analytical result above for one-layer network and empirical validations for deeper nets in Figure 11, if taken together, shows that attributions and boundary-based attributions are more similar in a smoothed model.
4 BOUNDARY-BASED ATTRIBUTION
Without the properties introduced by robust learning or randomized smoothing, the local gradient, i.e. saliency map, may not be a good approximation of decision boundaries. In this section, we build on the insights of our analysis to present a set of novel attribution methods that explicitly incorporate the normal vectors of nearby boundary segments. Importantly, these attribution methods can be applied to models that are not necessarily robust, to derive explanations that capture many of the beneficial properties of explanations for robust models.
Using the normal vector of the closest decision boundary to explain a classifier naturally leads to Definition 5, which defines attributions directly from the normal of the closest decision boundary.
Definition 5 (Boundary-based Saliency Map (BSM)) Given f and an input x, we define Boundary-based Saliency MapBS(x) as follows: BS(x) def = ∂fc(x
′)/∂x′, where x′ is the closest adversarial example to x, i.e. c = F (x) 6= F (x′) and ∀xm.||xm−x|| < ||x′−x|| → F (x) = F (xm).
Incorporating More Boundaries. The main limitation of using Definition 5 as a local explanation is obvious: the closest decision boundary only captures one segment of the entire decision surface. Even in a small network, there will be numerous boundary segments in the vicinity of a relevant point. Taking inspiration from Integrated Gradients, Definition 6 proposes the Boundary-based Integrated Gradient (BIG) by aggregating the attributions along a line between the input and its closest boundary segment.
Definition 6 (Boundary-based Integrated Gradient(BIG)) Given f , Integrated Gradient gIG and an input x, we define Boundary-based Integrated Gradient BS(x) as follows: BIG(x) := gIG(x;x′), where x is the nearest adversarial example to x, i.e. c = F (x) 6= F (x′) and ∀xm.||xm − x|| < ||x′ − x|| → F (x) = F (xm).
Geometric View of BIG. BIG explores a linear path from the boundary point to the target point. Because points on this path are likely to traverse different activation polytopes, the gradient of intermediate points used to compute gIG are normals of linear extensions of their local boundaries. As the input gradient is identical within a polytope Pi, the aggregate computed by BIG sums each gradient wi along the path and weights it by the length of the path segment intersecting with Pi. In other words, one may view IG as an exploration of the model’s global geometry that aggregates all boundaries from a fixed reference point, whereas BIG explores the local geometry around x. In the former case, the global exploration may reflect boundaries that are not particularly relevant to model’s observed behavior at a point, whereas the locality of BIG may aggregate boundaries that are more closely related (a visualization is shown in Fig. 1a).
Finding nearby boundaries. Finding the exact closest boundary segment is identical to the problem of certifying local robustness (Fromherz et al., 2021; Jordan et al., 2019; Kolter & Wong, 2018; Lee et al., 2020; Leino et al., 2021b; Tjeng et al., 2019; Weng et al., 2018), which is NP-hard for piecewise-linear models (Sinha et al., 2020). To efficiently find an approximation of the closest boundary segment, we leverage and ensemble techniques for generating adversarial examples, i.e. PGD (Madry et al., 2018), AutoPGD (Croce & Hein, 2020) and CW (Carlini & Wagner, 2017), and use the closest one found given a time budget. The details of our implementation are discussed in Section 5, where we show that this yields good results in practice.
5 EVALUATION
In this section, we first validate that the attribution vectors are more aligned to normal vectors of nearby boundaries in robust models(Fig. 3a). We secondly show that boundary-based attributions provide more “accurate” explanations – attributions highlight features that are actually relevant to the label – both visually (Fig. 4 and 5) and quantitatively (Table 1). Finally, we show that in a standard model, whenever attributions more align with the boundary attributions, they are more “accurate”.
General Setup. We conduct experiments over two data distributions, ImageNet (Russakovsky et al., 2015) and CIFAR-10 (Krizhevsky et al.). For ImageNet, we choose 1500 correctly-classified images from ImageNette (Howard), a subset of ImageNet, with bounding box area less than 80% of the original source image. For CIFAR-10, We use 5000 correctly-classified images. All standard and robust deep classifiers are ResNet50. All weights are pretrained and publicly available (Engstrom et al., 2019). Implementation details of the boundary search (by ensembling the results of PGD, CW and AutoPGD) and the hyperparameters used in our experiments, are included in Appendix B.2.
5.1 ROBUSTNESS→ BOUNDARY ALIGNMENT
In this subsection, we show that SM and IG better align with the normal vectors of the decision boundaries in robust models. For SM, we use BSM as the normal vectors of the nearest decision boundaries and measure the alignment by the `2 distance between SM and BSM following Proposition 1. For IG, we use BIG as the aggregated normal vectors of all nearby boundaries because
IG also incorporates more boundary vectors. Recently, Pan et al. (2021) also provides Adversarial Gradient Integral (AGI) as an alternative way of incorporating the boundary normal vectors into IG. We first use both BIG and AGI to measure how well IG aligns with boundary normals and later compare them in Sec. 5.2, followed by a formal discussion in Sec. 7.
Aggregated results for standard models and robust models are shown in Fig. 3a. It shows that adversarial training with bigger encourages a smaller difference between attributions and their boundary variants. Particularly, using `2 norm and setting = 3.0 are most effective for ImageNet compared to `∞ norm bound. One possible explanation is that the `2 space is special because training with `∞ bound may encourage the gradient to be more Lipschitz in `1 because of the duality between the Lipschitzness and the gradient norm, whereas `2 is its own dual.
5.2 BOUNDARY ATTRIBUTION→ BETTER LOCALIZATION
In this subsection, we show boundary attributions (BSM, BIG and AGI) better localize relevant features. Besides SM, IG and SG, we also focus on other baseline methods including Grad × Input (GTI) (Simonyan et al., 2013) and DeepLIFT (rescale rule only) (Shrikumar et al., 2017) that are reported to be more faithful than other related methods (Adebayo et al., 2018; 2020).
In an image classification task where ground-truth bounding boxes are given, we consider features within a bounding box as more relevant to the label assigned to the image. Our evaluation is performed over ImageNet only because no bounding box is provided for CIFAR-10 data. The metrics used for our evaluation are: 1) Localization (Loc.) (Chattopadhyay et al., 2017) evaluates the intersection of areas with the bounding box and pixels with positive attributions; 2) Energy Game (EG) (Wang et al., 2020a) instead computes the portion of attribute scores within the bounding box. While these two metrics are common in the literature, we propose the following additional metrics: 3)Positive Percentage (PP) evaluates the portion of positive attributions in the bounding box because a naive assumption is all features within bounding boxes are relevant to the label (we will revisit this assumption in Sec. 6); and 4) Concentration (Con.) sums the absolute value of attribution scores over the distance between the “mass” center of attributions and each pixel within the bounding box. Higher Loc., EG, PP and Con. are better results. We provide formal details for the above metrics in Appendix B.1.
We show the average scores for ResNet50 models in Table 1 where the corresponding boxplots can be found in Appendix B.4. BIG is noticeably better than other methods on Loc. EG, PP and Con. scores for both robust and standard models and matches the performance of SG on EG for a standard model. Notice that BSM is not significantly better than others in a standard model, which confirms our motivation of BIG – that we need to incorporate more nearby boundaries because a single boundary may not be sufficient to capture the relevant features.
We also measure the correlation between the alignment of SM and BSM with boundary normals and the localization abilities, respectively. For SM, we use BSM to represent the normal vectors of the boundary. For IG, we use AGI and BIG. For each pair X-Y in {SM-BSM, IG-AGI, IG-BIG}, we measure the empirical correlation coefficient between −||X− Y ||2 and the localization scores of X in a standard ResNet50 and the result is shown in Fig. 3b. Our results suggest that when the attribution methods better align with their boundary variants, they can better localize the relevant features in terms of the Loc. and EG. However, PP and Con. have weak and even negative correlations. One possible explanation is that the high PP and Con. of BIG and AGI compared to IG (as shown in Table 1) may also come from the choice of the reference points. Namely, compared to a zero vector, a reference point on the decision boundary may better filter out noisy features.
We end our evaluations by visually comparing the proposed method, BIG, against all other attribution methods for the standard ResNet50 in Fig. 4 and for the robust ResNet50 in Fig. 5, which demonstrates that BIG can easily and efficiently localize features that are relevant to the prediction. More visualizaitons can be found in the Appendix E.
Summary. Taken together, we close the loop and empirical show that standard attributions in robust models are visually more interpretable because they better capture the nearby decision boundaries. Therefore, the final take-away from our analytical and empirical results is if more resources are devoted to training robust models, effectively identical explanations can be obtained using much less costly standard gradient-based methods, i.e. IG.
6 DISCUSSION
Baseline Sensitivity. It is natural to treat that BIG frees users from the baseline selection in explaining non-linear classifiers. Empirical evidence has shown that IG is sensitive to the baseline inputs (Sturmfels et al., 2020). We compare BIG with IG when using different baseline inputs, white or black images. We show an example in Fig 6b. For the first two images, when using the baseline input as the opposite color of the dog, more pixels on dogs receive non-zero attribution scores. Whereas backgrounds always receive more attribution scores when the baseline input has the same color as the dog. This is because gIG(x)i ∝ (x− xb)i (see Def. 2) that greater differences in the input feature and the baseline feature can lead to high attribution scores. The third example further questions the readers using different baselines in IG whether the network is using the white dog to predict Labrador retriever. We demonstrate that conflicts in IG caused by the sensitivity to the baseline selection can be resolved by BIG. BIG shows that black dog in the last row is more important for predicting Labrador retriever and this conclusion is further validated by our counterfactual experiment in Appendix D. Overall, the above discussion highlights that BIG is significantly better than IG in reducing the non-necessary sensitivity in the baseline selection.
Limitations. We identify two limitations of the work. 1) Bounding boxes are not perfect groundtruth knowledge for attributions. In fact, we find a lot of examples where the bounding boxes either fail to capture all relevant objects or are too big to capture relevant features only. Fixing mislabeled bounding boxes still remain an open question and should benefit more expandability research in general. 2) Our analysis only targets on attributions that are based on end-to-end gradient computations. That is, we are not able to directly characterize the behavior of perturbation-based approaches, i.e. Mask (Fong & Vedaldi, 2017), and activation-based approaches, i.e. GradCAM (Selvaraju et al., 2017) and Feature Visualization (Olah et al., 2017).
7 RELATED WORK
Ilyas et al. (2019) shows an alternative way of explaining why robust models are more interpretable by showing robust models usually learn robust and relevant features, whereas our work serves as a geometrical explanation to the same empirical findings in using attributions to explain deep models. Our analysis suggests we need to capture decision boundaries in order to better explain classifiers,
whereas a similar line of work, AGI (Pan et al., 2021) that also involves computations of adversarial examples is motivated to find a non-linear path that is linear in the representation space instead of the input space compared to IG. Therefore, AGI uses PGD to find the adversarial example and aggregates gradients on the non-linear path generated by the PGD search. We notice that the trajectory of PGD search is usually extremely non-linear, complex and does not guarantee to return closer adversarial examples without CW or AutoPGD (see comparisons between boundary search approaches in Table B.2). We understand that finding the exact closest decision boundary is not feasible, but our empirical results suggest that the linear path (BIG) returns visually sharp and quantitative better results in localizing relevant features. Besides, a non-linear path should cause AGI fail to meet the symmetry axiom (Sundararajan et al., 2017) (see Appendix C for an example of the importance of symmetry for attributions). We further summarize the commons and differences in Table 6a.
In the evaluation of the proposed methods, we choose metrics related to bounding boxes over other metrics because, for classification, we are interested in whether the network associates relevant features with the label, while other metrics (Adebayo et al., 2018; Ancona et al., 2017; Samek et al., 2016; Wang et al., 2020b; Yeh et al., 2019), e.g., infidelity (Yeh et al., 2019), mainly evaluate whether output scores are faithfully attributed to each feature. Our idea of incorporating boundaries into explanations may generalize to other score attribution methods, e.g., Distributional Influence (Leino et al., 2018) and DeepLIFT (Shrikumar et al., 2017). The idea of using boundaries in explanations has also been explored by T-CAV (Kim et al., 2018), where a linear decision boundary is learned for the internal activations and associated with their proposed notion of concept.
When viewing our work as using nearby boundaries to explore the local geometry of the model's output surface, a related line of work is NeighborhoodSHAP (Ghalebikesabi et al., 2021), a local version of SHAP (Lundberg & Lee, 2017). When viewing our work as a different use of adversarial examples, other work focuses on counterfactual examples (semantically meaningful adversarial examples) on the data manifold (Chang et al., 2019; Dhurandhar et al., 2018; Goyal et al., 2019).
8 CONCLUSION
In summary, we rethink the question an explanation should answer for a classification task: which important features does the classifier use to place the input on a specific side of the decision boundary? We find that the answer relates to the normal vectors of decision boundaries in the neighborhood, and we propose BSM and BIG as boundary attribution approaches. Empirical evaluations on state-of-the-art classifiers validate that our approaches provide more concentrated, sharper, and more accurate explanations than existing approaches. Our idea of leveraging boundaries to explain classifiers connects explanations with adversarial robustness and should encourage the community to improve model quality for the sake of explanation quality.
A THEOREMS AND PROOFS
A.1 PROOF OF PROPOSITION 1
Proposition 1 Suppose that f has a (λ, δ)-robust saliency map gS at x, that x′ is the closest point on the closest decision boundary segment to x with ||x′ − x|| ≤ δ, and that n is the normal vector of that boundary segment. Then ||n − gS(x)|| ≤ λ||x − x′||.

Proof: The normal vector n can be computed efficiently by taking the derivative of the model's output w.r.t. the point on the decision boundary, i.e., n = ∂f(x′)/∂x′, where ∀xm ∈ R^d, F(xm) = F(x) if ||xm − x|| ≤ ||x′ − x||. Because we assume ||x − x′|| ≤ δ and the model has a (λ, δ)-robust saliency map, Def. 4 gives

||n − gS(x)|| ≤ λ||x − x′||
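As an illustration of the normal-vector computation in the proof above, the following is a minimal PyTorch sketch (our addition, not from the original paper). It assumes a model that maps a batch of inputs to logits, a point x_b already located (approximately) on the boundary, and the class index c predicted at x; following the proof's convention, we differentiate the class-c output.

import torch

def boundary_normal(model, x_b, c):
    # n = d f_c(x') / d x' evaluated at the boundary point, then normalized
    x_b = x_b.clone().detach().requires_grad_(True)
    logit = model(x_b.unsqueeze(0))[0, c]
    n = torch.autograd.grad(logit, x_b)[0]
    return n / n.norm()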
A.2 PROOF OF THEOREM 1
Theorem 1 Let m(x) = ReLU(Wx) be a one-layer network, and write mσ(x) for its randomized-smoothing counterpart. Let g(x) be the SM for mσ(x) and suppose ∀x′′ ∈ B(x, ||x − x′||), ||g(x′′)|| ≥ c, where x′ is the closest adversarial example. Then the following holds: ||g(x) − g(x′)|| ≤ λ, where λ is monotonically decreasing w.r.t. σ.
Proof:
We begin the proof by first introducing randomized smoothing.
Definition 7 (Randomized Smoothing (Cohen et al., 2019)) Suppose F(x) = argmax_c f_c(x); the smoothed classifier G(x) is defined as

G(x) := argmax_c Pr[F(x + ε) = c]   (1)

where ε ∼ N(0, σ²I).
Now the rest of the proof is three-fold: 1) first, we show that there exists a non-linear activation function Er(x) such that the output of the smoothed ReLU network mσ(x) is unchanged when the ReLU activation is replaced with the Er activation; 2) second, we derive the saliency map of the network with the Er activation; and 3) lastly, we show that the difference between the SM and the BSM of the network with the Er activation is bounded, with a bound inversely related to the standard deviation used to create the smoothed ReLU network mσ(x).
1) Step I: the error activation (Er) function and randomized smoothing.1
Randomized smoothing creates a smoothed model that returns whichever label the base classifier is most likely to return under the perturbations generated by the Gaussian noise. Now consider the output of each class under this noise. Let yi be the output of the i-th class of the network ReLU(Wx), that is

yi = E_{ε∼N(0,σ²I)} [ReLU(wi⊤(x + ε))]   (2)

To simplify the notation, we denote E_{ε∼N(0,σ²I)} as E. We expand Equation (2):

yi = E[ReLU(wi⊤x + wi⊤ε)] = E[ReLU(u + ε′)]   (3)

where u = wi⊤x and ε′ = wi⊤ε. Here u is a scalar and ε′ follows a zero-centered univariate Gaussian with standard deviation s ∝ σ, because the dot product between the constant weight vector wi and the random vector ε is a linear combination of the dimensions of ε, and the covariance between the dimensions of ε is 0 for the Gaussian noise used for randomized smoothing in the literature (Cohen et al., 2019).

1We appreciate the discussion with the author Pan Kessel of Dombrowski et al. (2019) for the derivations from Equation (6) to (7).

By expanding the expectation symbol to its integral form, we obtain:
yi = (1/(s√(2π))) ∫_{−∞}^{∞} exp(−ε′²/(2s²)) ReLU(u + ε′) dε′   (4)

Let τ = u + ε′ and notice that ReLU(τ) = 0 if τ < 0; the equation above can be rewritten as:

yi = (1/(s√(2π))) ∫₀^∞ exp(−(τ − u)²/(2s²)) τ dτ   (5)

   = (s/√(2π)) exp(−u²/(2s²)) + (u/2)[1 + Erf(u/(√2·s))]   (6)–(7)
where Erf is the error function defined as Erf(x) = (2/√π) ∫₀ˣ exp(−t²) dt. We therefore define the Er activation for an input u with standard deviation s as

Er(u; s) = (s/√(2π)) exp(−u²/(2s²)) + (u/2)[1 + Erf(u/(√2·s))]   (8)
and we have shown that

yi = E_{ε∼N(0,σ²I)}[ReLU(wi⊤(x + ε))] = Er(wi⊤x; s(σ))   (9)

That is, to analyze the gradient of the output of a smoothed model w.r.t. the input, we can alternatively analyze the gradient of an equivalent Er network. We plot three examples of the Er activation in Fig. 7 so the reader can see what the function looks like.
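As a quick numerical check of Equation (9) (our addition, not part of the original derivation), the sketch below compares a Monte Carlo estimate of E[ReLU(u + ε′)] against the closed-form Er(u; s) for arbitrary illustrative values of u and s.

import numpy as np
from scipy.special import erf

def Er(u, s):
    # Closed form from Equation (8)
    return s / np.sqrt(2 * np.pi) * np.exp(-u ** 2 / (2 * s ** 2)) \
        + u / 2 * (1 + erf(u / (np.sqrt(2) * s)))

rng = np.random.default_rng(0)
u, s = 0.7, 1.5
eps = rng.normal(0.0, s, size=1_000_000)
mc = np.maximum(u + eps, 0.0).mean()  # Monte Carlo estimate of E[ReLU(u + eps')]
print(mc, Er(u, s))                   # the two values should agree closely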
2) Step II: the Saliency Map for an Er network.
By the definition of the saliency map (Def. 1) and the chain rule, we have:

SM(x) = ∂yi/∂x = (∂yi/∂u)(∂u/∂x)   (let u = wi⊤x)   (10)
      = (∂/∂u) Er(u; s) · wi   (11)
      = (1/2)[1 + Erf(u/(√2·s))] · wi   (12)

The transition from Equation (11) to (12) is based on the fact that the derivative of Erf(x) is (2/√π) exp(−x²).
3) Step III: the difference between SM and BSM for an Er network.
Let x′ be the closest point on the decision boundary of the smoothed classifier mσ and let ||x − x′|| = r (for the closed-form expression of r, see Cohen et al. (2019)). Based on the definition of BSM, we have

BSM(x) = ∂yi(x′)/∂x′ = (1/2)[1 + Erf(u′/(√2·s))] · wi,   where u′ = wi⊤x′   (13)
The difference between SM and BSM is therefore computed as

||BSM(x) − SM(x)|| = || (1/2)[1 + Erf(u′/(√2·s))] · wi − (1/2)[1 + Erf(u/(√2·s))] · wi ||   (14)
 = (1/2) |Erf(u′/(√2·s)) − Erf(u/(√2·s))| · ||wi||   (15)
 ≤ (1/2) [ |Erf(u′/(√2·s))| + |Erf(u/(√2·s))| ] · ||wi||   (Triangle Inequality)   (16)
We notice that u′ is bounded because u′ = wi⊤x′ ≤ ||wi|| · ||x′|| ≤ ||wi|| · (||x|| + r), since ||x′|| ≤ ||x|| + r; similarly, u = wi⊤x ≤ ||wi|| · ||x|| ≤ ||wi|| · (||x|| + r). Because the Erf function is increasing w.r.t. its input and s > 0, we arrive at the following inequality:
||BSM(x) − SM(x)|| ≤ λ   (17)

where

λ = Erf(||wi|| · (||x|| + r) / (√2·s)) · ||wi||   (18)

We can drop the absolute-value symbols because the output of Erf is positive when its input is positive. Now, given that ||wi||, r and ||x|| are constants for a given input x, the upper bound Erf(||wi|| · (||x|| + r)/(√2·s)) · ||wi|| is monotonically increasing as s decreases. From Step I we know that s ∝ σ; therefore, we have proved that there exists an upper bound λ on the difference between the SM and the BSM of a smoothed classifier, and that λ is monotonically decreasing w.r.t. the standard deviation of the Gaussian noise.
B EXPERIMENT DETAILS AND ADDITIONAL RESULTS
B.1 METRICS WITH BOUNDING BOXES
We will use the following extra notation in this section. Let X, Z and U be the sets of indices of all pixels, of pixels with positive attributions, and of pixels inside the bounding box, respectively, for a target attribution map g(x). We denote the cardinality of a set S by |S|.
Localization (Loc.) (Chattopadhyay et al., 2017) evaluates the overlap between the bounding box and the set of pixels with positive attributions.
Definition 8 (Localization) For a given attribution map g(x), the localization score (Loc.) is defined as
Loc := |Z ∩ U| / ( |U| + |Z ∩ (X \ U)| )   (19)
Energy Game (EG) (Wang et al., 2020a) instead computes the portion of attribution scores within the bounding box.
Definition 9 (Energy Game) For a given attribution map g(x), the energy game EG is defined as
EG := Σ_{i∈Z∩U} g(x)_i / Σ_{i∈X} max(g(x)_i, 0)   (20)
Positive Percentage (PP) evaluates the sum of positive attribution scores over the total (absolute value of) attribution scores within the bounding box.
Definition 10 (Positive Percentage) Let V be the set of indices of all pixels with negative attribution scores; for a given attribution map g(x), the positive percentage PP is defined as

PP := Σ_{i∈Z∩U} g(x)_i / ( Σ_{i∈Z∩U} g(x)_i − Σ_{i∈V∩U} g(x)_i )   (21)
Concentration (Con.) evaluates, for each pixel within the bounding box, its attribution "mass" weighted by the inverse distance to the "mass" center of the attributions, summed over the box. Notice that cx and cy can be computed with scipy.ndimage.center_of_mass. This definition rewards attribution maps whose high-magnitude scores lie close to the mass center.
Definition 11 (Concentration) For a given attribution map g(x), the concentration Con. is defined as follows:

Con. := Σ_{i∈U} ĝ(x)_i / √((i_x − c_x)² + (i_y − c_y)²)   (22)

where ĝ is the normalized attribution map such that ĝ_i = g_i / Σ_{i∈U} |g_i|, and i_x, i_y are the coordinates of pixel i with

c_x = Σ_{i∈U} i_x ĝ(x)_i / Σ_{i∈U} ĝ(x)_i ,   c_y = Σ_{i∈U} i_y ĝ(x)_i / Σ_{i∈U} ĝ(x)_i   (23)
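For concreteness, the following is a minimal NumPy sketch of the four metrics above (our illustration, not the exact evaluation code); g is an (H, W) attribution map, box a boolean mask of the bounding box (the set U), and a small epsilon guards the division at the mass center.

import numpy as np
from scipy import ndimage

def localization_metrics(g, box):
    Z = g > 0                                                      # positive attributions
    V = g < 0                                                      # negative attributions
    loc = (Z & box).sum() / (box.sum() + (Z & ~box).sum())         # Eq. 19
    eg = g[Z & box].sum() / np.maximum(g, 0.0).sum()               # Eq. 20
    pp = g[Z & box].sum() / (g[Z & box].sum() - g[V & box].sum())  # Eq. 21
    g_hat = np.where(box, np.abs(g), 0.0)
    g_hat = g_hat / g_hat.sum()                                    # normalize within the box
    cy, cx = ndimage.center_of_mass(g_hat)                         # mass center (Eq. 23)
    ys, xs = np.mgrid[:g.shape[0], :g.shape[1]]
    dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2) + 1e-8
    con = (g_hat / dist)[box].sum()                                # Eq. 22
    return loc, eg, pp, con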
Besides metrics related to bounding boxes, there are other metrics in the literature used to evaluate attribution methods (Adebayo et al., 2018; Ancona et al., 2017; Samek et al., 2016; Wang et al., 2020b; Yeh et al., 2019). We focus on metrics that use provided bounding boxes, as we believe that they offer a clear distinction between likely relevant features and irrelevant ones.
B.2 IMPLEMENTING BOUNDARY SEARCH
Our boundary search uses a pipeline of PGDs, CW and AutoPGD. The adversarial examples returned by each method are compared, and the closer ones are kept. If no adversarial example is found, the pipeline returns the point from the last iteration of the first method (PGDs in our case). Hyper-parameters for each attack can be found in Table 2. The implementations of PGDs and CW are based on Foolbox (Rauber et al., 2020; 2017) and the implementation of AutoPGD is based
on the authors’ public repository2 (we only use apgd-ce and apgd-dlr losses for efficiency reasons). All computations are done using a GPU accelerator Titan RTX with a memory size of 24 GB. Comparisons on the results of the ensemble of these three approaches are shown in Fig. 10a.
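For illustration, a minimal sketch of the ensemble comparison is given below (names are ours; the actual implementation uses Foolbox and the AutoPGD repository). Each entry of attacks is assumed to be a callable returning an adversarial candidate or None.

import torch

def closest_boundary_point(model, x, attacks):
    best = None
    for attack in attacks:          # e.g., PGDs, CW, AutoPGD in order
        x_adv = attack(model, x)
        if x_adv is None:
            continue
        if best is None or torch.norm(x_adv - x) < torch.norm(best - x):
            best = x_adv            # keep the candidate closest to x
    return best  # if None, the caller falls back to the last PGD iterate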
B.3 HYPER-PARAMETERS FOR ATTRIBUTION METHODS
All attributions are implemented with Captum (Kokhlikyan et al., 2020) and visualized with Trulens (Leino et al., 2021a). For BIG and IG, we use 20 intermediate points between the baseline and the input, and the interpolation method is set to riemann trapezoid. For AGI, we build on the authors' public repository3. The hyper-parameter choices follow the authors' defaults for ImageNet, and we make minimal changes to adapt them to CIFAR-10 (see Fig. 10b).
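Given a boundary point from the search in Appendix B.2, BIG can be computed with Captum roughly as follows. This is a hedged sketch assuming standard Captum semantics, with the boundary point used as the baseline of the linear path; it is not the paper's verbatim code.

import torch
from captum.attr import IntegratedGradients

def big_attribution(model, x, x_boundary, target):
    # Integrate gradients on the line from the closest boundary point
    # (used as the baseline) to the input x, with 20 intermediate points.
    ig = IntegratedGradients(model)
    return ig.attribute(x, baselines=x_boundary, target=target,
                        n_steps=20, method="riemann_trapezoid")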
To visualize the attribution map, we use the HeatmapVisualizer with blur=10, normalization type="signed max" and default values for other keyword arguments from Trulens.
B.4 DETAILED RESULTS ON LOCALIZATION METRICS
We show the average scores for each localization metric in Sec. 5. We also show boxplots of the scores for each localization metric in Fig. 8 for the standard ResNet50 model and in Fig. 9 for the robust ResNet50 (ℓ2|3.0). Higher scores are better for all metrics.
2https://github.com/fra31/auto-attack 3https://github.com/pd90506/AGI
B.5 ADDITIONAL COMPARISON WITH AGI
We additionally compare the ability of BIG and AGI to localize relevant features when only PGDs are used to return the closest boundary points; that is, we recursively increase the norm bound and perform a PGD attack until we first succeed in finding an adversarial point. We denote this approach BIGp. Note that BIGp still differs from AGI in the type of path, i.e., lines versus curves, over which the integral is performed. AGI also aggregates the path integral starting from a set of adversarial points found by targeted PGD attacks, whereas BIGp starts from the adversarial point returned by an untargeted PGD attack. We use the same parameters for both PGD and AGI from Fig. 2 and run the experiments over the same dataset used in Sec. 5.1. For reference, we also include the results of IG. The results are shown in Table 3. We notice that after removing CW and AutoPGD, BIGp actually performs better than AGI, and even slightly better than BIG for the robust model. One explanation for the small improvement from BIGp might be that, for a robust network, the gradient at each PGD iteration is more informative and less noisy than for a standard model, so the attack better approximates the closest decision boundary. The results in Table 3 therefore demonstrate that BIG and BIGp localize relevant features better than AGI.
B.6 ADDITIONAL LOCALIZATION METRIC
Besides the localization metrics used in Sec. 5.1, we discuss an additional localization metric frequently used for evaluating attention- and CAM-based explanations: Top1-Loc (Choe & Shim, 2019; Aggarwal et al., 2020). Top1-Loc is calculated as follows: an instance is considered Top1-Loc correct for a given attribution if 1) the prediction is Top1-correct; and 2) it is GT-Loc correct, namely the IoU of the ground-truth bounding box and the area highlighted by the attribution is more than 50%. When only images that are Top1-correct are used, Top1-Loc reduces to GT-Loc. Top1-Loc differs from the other localization metrics used for evaluating attribution methods because it takes the prediction behavior of the target model into account, which in general is not an axiom used to motivate gradient-based attribution methods. As the previous evaluations only consider images on which the model makes correct Top1 predictions, in this section we use the same true-positive images; Top1-Loc accuracy then reduces to GT-Loc accuracy, so we measure GT-Loc directly. To determine which part of the image is highlighted by the attribution, we compute a threshold for each attribution map, and a pixel is considered within the highlighted region if and only if its attribution score is higher than the threshold. For a given attribution map, we take the threshold t to be the q-th percentile of the absolute values of the attribution scores. We plot GT-Loc accuracy against q in Fig. 13. We notice that attention- and CAM-based attributions usually produce cloud-like visualizations because of the blurring techniques or upsampling layers used to compute them. To ensure GT-Loc also works for the gradient-based attributions of interest in this paper, we include additional results (Fig. 14) where we first apply a Gaussian blur (σ = 3.0) to the attribution map before calculating GT-Loc accuracy. The results are aggregated over 1500 images from ImageNette on a standard ResNet50 and a robust ResNet50, respectively. Higher GT-Loc scores are better.
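A minimal sketch of this GT-Loc computation (our illustration; thresholding on absolute scores is an assumption consistent with the percentile definition above):

import numpy as np

def gt_loc_correct(g, box, q=40.0):
    t = np.percentile(np.abs(g), q)   # q-th percentile of |attribution|
    high = np.abs(g) > t              # highlighted region
    iou = (high & box).sum() / (high | box).sum()
    return iou > 0.5                  # GT-Loc correct iff IoU > 50%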
Behavior of BIG. The results in Fig. 13 and 14 show that BIG is better than all other attributions on standard models excluding SG, and uniformly better including SG on a robust model. Before we explain the behavior of SG (green curves) on standard models in the next paragraph, we also observe that: 1) blurring only changes the GT-Loc scores but not the overall rankings across attributions; 2) a threshold corresponding to a percentile near 40% provides the best GT-Loc scores for all methods; and 3) gradient-based attributions generally provide worse GT-Loc (or Top1-Loc) scores than the CAM-based and attention-based approaches in the literature (Choe & Shim, 2019; Aggarwal et al., 2020), which is not surprising because gradient-based approaches are usually axiomatically justified to be faithful to the model. It is therefore expected that the model will, to some degree, learn spurious features from the input, which makes gradient-based attributions noisier than attention- and CAM-based ones. Consequently, when localizing relevant features, users may want to consult activation-based approaches, i.e., CAMs, but when debugging and ensuring the network learns fewer spurious and irrelevant features, users should instead use gradient-based approaches because of the axioms behind them.
Behavior of SG on Standard Models. SG is uniformly better than all other approaches in terms of GT-Loc accuracy on a standard model, which is surprising but not totally unexpected. We believe the reason is that SG is actually the gradient of a smoothed counterpart of the standard model (see the discussion near Theorem 1), which is more robust. The comparison between SG and the other approaches is therefore not apples-to-apples, because SG can be less faithful to the standard model; it is instead more faithful to the smoothed classifier. This is very likely why SG is worse than BIG in Fig. 13b and 14b, where the smoothing technique provides only marginal robustness improvement for a model that has already been robustly trained.
B.7 SANITY CHECK FOR BIG
We perform sanity checks for BIG using rank-order correlations between the absolute values of BIG attributions when randomizing the weights from the top layer to the bottom (Adebayo et al., 2018). To ensure the output of the model does not become NaN when randomizing the weights of each trainable layer, we replace each weight matrix with a random matrix of the same norm, as follows.
import torch

def _randomized_models():
    # model and num_blocks are defined in the surrounding scope
    all_parameters = [param for param in model.parameters()]
    # Randomize layers from the top down (reversed parameter order)
    for step, param in enumerate(all_parameters[::-1]):
        random_w = torch.randn_like(param)
        # Rescale the randomized weights to the norm of the original ones,
        # preventing the network from outputting NaN results
        param.data = random_w * torch.norm(param.data) / torch.norm(random_w)
        # Yield a model snapshot after every num_blocks randomized layers
        if step % num_blocks == 0 or step == len(all_parameters) - 1:
            yield model
At each iteration we replace the next 5 layers with randomized weights, following the reversed sequence returned by model.parameters(), and the results are plotted in Fig. 15. We consider that BIG passes the sanity check, as the results are similar to the top row of Fig. 4 in Adebayo et al. (2018).
B.8 ADDITIONAL EXPERIMENT WITH SMOOTHED GRADIENT
Theorem 1 demonstrates that for a one-layer network, as we increase the standard deviation σ of the Gaussian distribution used to create the smoothed model mσ (Cohen et al., 2019), the difference between the saliency map and the boundary-based saliency map computed on mσ is bounded by a constant λ that is monotonically decreasing w.r.t. σ. That is, a greater σ produces a more smoothed model, for which the saliency map (SM) explanation of mσ is a good approximation of the boundary-based saliency map (BSM). However, as the depth of the network increases, a closed-form analysis may be difficult to derive. Therefore, in this section we aim to empirically validate that the takeaway from Theorem 1 generalizes to deeper networks.
Computing SM for mσ. One practical issue in computing any gradient-related explanation for the smoothed model mσ is that mσ is defined in integral form, which cannot be directly built with tf.keras. However, Theorem 2 shows that the smoothed gradient of the original model m is equivalent to the saliency map of the smoothed model mσ; namely, the order of smoothing and integration is exchangeable when computing the gradient.
Theorem 2 (Proposition 1 from Wang et al. (2020c)) Suppose a model f(x) satisfies max |f(x)| < ∞. For the Smoothed Gradient gSG(x), we have

gSG(x) = ∂(f ⊛ q)(x)/∂x   (24)

where q(x) = N(0, σ²I) and ⊛ denotes the convolution operation.
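A minimal Monte Carlo sketch of the Smoothed Gradient (our illustration, assuming a PyTorch model that maps a batch to logits):

import torch

def smooth_grad(model, x, target, sigma, n=50):
    # By Theorem 2, averaging input gradients under Gaussian noise equals
    # the saliency map of the smoothed model m_sigma (up to sampling error)
    grads = []
    for _ in range(n):
        xn = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        logit = model(xn.unsqueeze(0))[0, target]
        grads.append(torch.autograd.grad(logit, xn)[0])
    return torch.stack(grads).mean(dim=0)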
Computing BSM for mσ. Another practical issue is that the decision boundary of a smoothed model mσ is not defined deterministically, as randomized smoothing provides a probabilistic guarantee. In this paper, we take the following steps to approximate the decision boundary of a smoothed model. To generate adversarial examples for the smoothed ResNet50 classifier, we need to back-propagate through the noise. The noise sampler is usually not accessible to an attacker who wants to fool a model protected by randomized smoothing; however, our goal in this section is not to reproduce a realistic attack, but to find a point on the boundary. We therefore sample the noise before running the PGD attack and use the same noise across all instances. The steps are listed as follows:
1. We use numpy.random.randn as the sampler for Gaussian noise with its random seed set to 2020. We use 50 random noises per instance.
2. In PGD attack, we aggregate the gradients of all 50 random inputs before we take a regular step to update the input.
3. We set ε = 3.0 and run at most 40 iterations with a step size of 2ε/40 (a minimal sketch of the full loop is given after this list).
4. The early stop criteria for the loop of PGD is that when less than 10% of all randomized points have the original prediction.
5. When computing Smooth Gradient for the original points or for the adversarial points, we use the same random noise that we generated to approximate the smoothed classifier.
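The sketch below (our illustration) assembles the steps above; the names model and x (a single input tensor) are assumptions, and the L2 step and projection details are simplified relative to the actual implementation.

import numpy as np
import torch
import torch.nn.functional as F

rng = np.random.RandomState(2020)          # step 1: fixed sampler, seed 2020
noises = torch.as_tensor(rng.randn(50, *x.shape), dtype=torch.float32)
eps, steps = 3.0, 40
alpha = 2 * eps / steps
y0 = model(x.unsqueeze(0)).argmax(dim=1)
x_adv = x.clone()
for _ in range(steps):
    x_adv.requires_grad_(True)
    # step 2: aggregate the gradients over all 50 fixed noisy copies
    loss = torch.stack([F.cross_entropy(model((x_adv + n).unsqueeze(0)), y0)
                        for n in noises]).sum()
    grad = torch.autograd.grad(loss, x_adv)[0]
    with torch.no_grad():
        x_adv = x_adv + alpha * grad / grad.norm()                # L2 PGD step
        delta = x_adv - x
        x_adv = x + delta * min(1.0, eps / delta.norm().item())  # project to eps-ball
        preds = torch.cat([model((x_adv + n).unsqueeze(0)).argmax(dim=1)
                           for n in noises])
    if (preds == y0).float().mean() < 0.10:    # step 4: early-stop criterion
        break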
Results. We run this experiment with 500 images from ImageNet on ResNet50, as the computation is significantly more expensive than the previous experiments. We compute the ℓ2 distances between the SM and the BSM obtained from the steps above for several values of σ, as shown in Fig. 11. Notably, the trend of the log difference against the standard deviation σ of the Gaussian noise validates that the qualitative meaning of Theorem 1 holds even for large networks. That is, as the model becomes smoother, the saliency map explanation becomes a good approximation of the boundary-based saliency map.
C SYMMETRY OF ATTRIBUTION METHODS
Sundararajan et al. (2017) prove that the linear path is the only path integral that satisfies symmetry; that is, when the order of two features is exchanged for a network that does not use any order information from the input, their attribution scores should not change. One simple way to show the importance of symmetry is the following example; we refer readers to Sundararajan et al. (2017) for more analysis.
Example 1 Consider a function f(x, y) = min(x, y), and suppose we attribute the output of f to the inputs at x = 1, y = 1 with the baseline x = 0, y = 0. An example non-linear path from the baseline to the input is (x = 0, y = 0) → (x = 1, y = 0) → (x = 1, y = 1). On this path, f(x, y) = min(x, y) = y after the point (x = 1, y = 0); therefore, the gradient integral returns 0 for the attribution score of x and 1 for y (we ignore the infinitesimal contribution of (x = 0, y = 0) → (x = 1, y = 0)). Similarly, choosing the path (x = 0, y = 0) → (x = 0, y = 1) → (x = 1, y = 1) makes x appear more important. Only the linear path returns equal attributions for the two symmetric variables in this case.
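The example can be verified numerically; the sketch below (our addition) integrates finite-difference gradients of f(x, y) = min(x, y) along each path.

import numpy as np

f = lambda x, y: min(x, y)

def path_attribution(path, n=2000, h=1e-5):
    # Integrate the gradient of f along a piecewise-linear path
    ax = ay = 0.0
    for (x0, y0), (x1, y1) in zip(path[:-1], path[1:]):
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            gx = (f(x + h, y) - f(x - h, y)) / (2 * h)
            gy = (f(x, y + h) - f(x, y - h)) / (2 * h)
            ax += gx * (x1 - x0) / n
            ay += gy * (y1 - y0) / n
    return ax, ay

print(path_attribution([(0, 0), (1, 0), (1, 1)]))  # ~(0, 1): all credit to y
print(path_attribution([(0, 0), (0, 1), (1, 1)]))  # ~(1, 0): all credit to x
print(path_attribution([(0, 0), (1, 1)]))          # ~(0.5, 0.5): symmetric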
D COUNTERFACTUAL ANALYSIS IN THE BASELINE SELECTION
The discussion in Sec. 6 shows an example with two dogs in the image. IG with a black baseline suggests that the body of the white dog is also useful to the model for predicting its label, while the black dog is mixed: part of the black dog receives positive attributions and the rest contributes negatively to the prediction. Our proposed method BIG, in contrast, clearly shows that the most important part is the black dog, followed by the white dog. To validate whether the model is actually using the white dog, we manually remove the black dog or the white dog from the image and check whether the model retains its prediction. The result is shown in Fig. 12. Clearly, removing the black dog changes the model's prediction from Labrador retriever to English foxhound, while removing the white dog does not change the prediction. This result supports the claim that BIG is more reliable than IG with a black baseline in this case, providing a more faithful explanation of the classification result for this instance.
E ADDITIONAL VISUALIZATIONS FOR BIG
More visualizations comparing BIG with other attributions can be found in Fig. 16 and 17. We show several examples in Fig. 18 where there is more than one object in the input and we explain the model's Top1 prediction; BIG is able to localize the objects that are actually relevant to the predicted label. | 1. What is the novel approach introduced by the paper in explaining non-robust models?
2. What are the strengths of the proposed method, particularly in its empirical results?
3. What are the weaknesses of the paper, especially concerning its conclusions and comparisons with prior works? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces boundary attributions, which leverage the connection between boundary normal vectors and gradients to yield explanations for non-robust models that carry over many of the favorable properties observed of explanations on robust models. It also proposes BIG to explain models.
Review
Strengths:
Table 1 shows the empirical results are good.
Weakness:
My major concern with this paper is that the conclusion is already known. For example, Ilyas et al. show that robust models produce better perceptually aligned features under gradient descent, and adversarially robust models are known to have smooth decision boundaries [1].
[1] Theoretically Principled Trade-off between Robustness and Accuracy. ICML 2019. |
ICLR | Title
Learning 3D Point Cloud Embeddings using Optimal Transport
Abstract
Learning embeddings of any data largely depends on the ability of the target space to capture semantic relations. The widely used Euclidean space, where embeddings are represented as point vectors, is known to be lacking in its potential to exploit complex structures and relations. Contrary to standard Euclidean embeddings, in this work, we embed point clouds as discrete probability distributions in Wasserstein space. We build a contrastive learning setup to learn Wasserstein embeddings that can be used as a pre-training method with or without supervision for any downstream task. We show that the features captured by Wasserstein embeddings are better in preserving the point cloud geometry, including both global and local information, thus resulting in improved quality embeddings. We perform exhaustive experiments and demonstrate the effectiveness of our method for point cloud classification, transfer learning, segmentation and interpolation tasks over multiple datasets including synthetic and real-world objects in both supervised and self-supervised settings. We also compare against other existing methods and show that our method outperforms them in all downstream tasks. Additionally, our study reveals a promising interpretation of capturing critical points of point clouds that makes our proposed method self-explainable.
1 INTRODUCTION
Recent years have seen major advancements in 3D point cloud representation learning. It has gained prominence in a wide spectrum of areas such as robotics (Maturana & Scherer, 2015), computer vision (Su et al., 2015) and animation (Pan et al., 2020), with a broad range of applications including shape synthesis and modeling (Yi et al., 2016), autonomous driving (Mahjourian et al., 2018) and indoor navigation (Zhu et al., 2017). Metric learning for good-quality point cloud embeddings is a crucial problem given the unique set of challenges associated with 3D data, from processing point clouds in various forms to learning in different spaces. Processing and developing learning methods for point clouds is a major challenge due to their irregular, unstructured and unordered nature.
Earlier methods process point clouds by converting them into regular structures like volumetric representations (Maturana & Scherer, 2015), (Wu et al., 2015) or 2D image projections (Qi et al., 2016), (Su et al., 2015) to employ well-explored, powerful convolutional techniques. However, these transformations either incur loss of information or require high memory and computational complexity. Later, methods were developed to learn representations directly from raw point clouds (Qi et al., 2017a), (Qi et al., 2017b), (Wang et al., 2019); these methods either process each point individually or infer features from local regions of a point cloud. The state-of-the-art methods in this category are largely classification-, generation- or reconstruction-based supervised, unsupervised or self-supervised methods.
The common choice of recent 3D point cloud representation learning methods is to operate on and represent point clouds as point vectors in Euclidean spaces, where the relation between data points is captured by either angle or distance. The embedding space largely determines the quality of embeddings, as it governs how well the target space can capture the structure of the data. Euclidean space is limited in its potential to capture complex structures and possible semantic relations. Recognizing these drawbacks, many works use hyperbolic space (Nickel & Kiela, 2018), (Nickel & Kiela, 2017) to capture uncertainty and asymmetric relationships for word and graph embeddings.
As Euclidean space is constrained in its ability to represent data structures, we need to go beyond it to obtain more expressive embeddings for point clouds. Recent studies show that many spaces can be embedded into Wasserstein space with low distortion (Frogner et al., 2019), which reflects how large Wasserstein spaces are. Courty et al. (2018) mimic the Wasserstein distance in Euclidean space for image embeddings to build efficient methods while availing the flexibility of Wasserstein space. There are also recent methods for point cloud embeddings using Optimal Transport (OT) based distances: Kawano et al. (2020), motivated by Courty et al. (2018), propose a method to approximate the Wasserstein distance by the Euclidean norm between two point cloud embeddings. Since Euclidean space has limited representational ability, finding isometric, low-distortion point cloud embeddings this way is difficult. Another work by Nguyen et al. (2021) examines how OT-based distances used for point cloud reconstruction affect the quality of the learnt embeddings; however, this method utilizes OT-based distances only for the reconstruction loss, which is not enough to learn complex shapes, and it fails to capture fine details of point clouds.
Motivated by the aforementioned limitations and inspired by Frogner et al. (2019), in this paper we advocate mapping a point cloud to a discrete distribution in Wasserstein space. We build a contrastive learning setup to learn point cloud embeddings: by contrasting point clouds against each other, we intend to learn features that are common within and distinctive across distributions. The setup can be applied in both supervised and self-supervised settings. Since the exact Wasserstein distance has high computational complexity, we use the Sliced Wasserstein (SW) distance, a low-cost approximation. Along with comparisons against commonly used distance measures such as the L2 norm and Cosine similarity, we also compare our method against recent OT-based methods for point clouds. We show that the learnt features capture the point cloud structure better than Euclidean embeddings and consistently perform better in multiple 3D analysis and synthesis tasks. We argue that our approach of incorporating an OT metric in a contrastive learning setup captures the underlying geometry and global shape pertaining to critical points (as shown in Figure 1) as well as the fine details of a point cloud.
Our contributions: i) To the best of our knowledge, we are the first to propose the use of an OT metric, which exploits the geometry of the data, along with contrastive learning for point clouds. Unlike Euclidean embeddings, we represent a point cloud as a discrete probability distribution in the embedding space. ii) Using this representation, we develop a method to learn Wasserstein embeddings for 3D point clouds endowed by a contrastive learning setup. We introduce a novel neural network architecture which takes pairs of point clouds as input and uses a supervised or self-supervised contrastive loss, depending on the availability of labels, to minimize the Wasserstein distance between similar point clouds. A major advantage of our network is that it can be used as a pretrained model for any downstream network. iii) We perform exhaustive experiments over a wide variety of tasks (supervised and self-supervised learning for classification, transfer learning, segmentation, and interpolation) on four popular point cloud datasets. We show that our Wasserstein embeddings are better at capturing the inherent geometry of point clouds. Additionally, we study point cloud embeddings in the most commonly used Euclidean space for our proposed architecture by replacing the OT metric with the L2 norm (our baseline). We also compare our approach (CL+SW2) against the other existing methods and show that our method outperforms them in all the downstream tasks. iv) We further explore the self-explaining aspect of our model and illustrate the 3D Wasserstein features computed by the encoder (as shown in Figure 1), showing that Wasserstein embeddings are better at capturing critical points and semantic structure amenable to the optimization task.
2 PRELIMINARIES
In this section, we briefly present the optimal transport metric, variants of the Wasserstein distance, and the contrastive learning setup used in our proposed method.
2.1 OPTIMAL TRANSPORT AND WASSERSTEIN DISTANCE
Optimal transport aims to solve for the most efficient way to transport mass between two probability distributions. Formally, given two probability distributions µ and ν on a metric space X , for p ≥ 1, the p-Wasserstein distance is given by
W_p(µ, ν) = ( inf_{π∈Π(µ,ν)} ∫_{X×X} c(x, y)^p dπ(x, y) )^{1/p}   (1)
where π is a transport plan that defines a flow of mass from µ to locations in ν, Π(µ, ν) is the set of joint probability distributions with marginals µ and ν, and c(x, y) is the ground metric, which assigns the cost of moving a unit of mass x ∈ X from µ to some location y ∈ X in ν. The cost of moving the mass in µ to match ν according to the optimal transport plan π∗ is called the Wasserstein distance between the two distributions (Villani, 2003).
The above equation can also be written for discrete distributions. Say µ̂ = Σ_{i=1}^{m} a_i δ(x_i) and ν̂ = Σ_{j=1}^{n} b_j δ(y_j) are two discrete distributions, where {a_i}, i = 1 . . . m and {b_j}, j = 1 . . . n are probability masses that sum to 1, δ is the Dirac delta function, and {x_i}, i = 1 . . . m and {y_j}, j = 1 . . . n are support points in R^d, with m and n the number of points in each measure. Then the discrete version of Equation 1 is

W_p(µ̂, ν̂) = ( min_{P∈U(a,b)} ⟨C^p, P⟩ )^{1/p}   (2)
where ⟨·, ·⟩ denotes the Frobenius dot-product, C ∈ R_+^{m×n} is the matrix of pairwise ground-metric distances, P is the coupling matrix, and U(a, b) = {P ∈ R^{m×n} : P·1_n = a, P⊤·1_m = b} is the set of all valid coupling matrices. Interestingly, a closed-form solution for the Wasserstein distance exists only when the distributions are one-dimensional measures with an L_p norm as the cost function. The closed form of the Wasserstein distance in 1-D is (Peyré & Cuturi, 2019)
W_p(µ, ν) = ( ∫₀¹ |F_µ^{−1}(t) − F_ν^{−1}(t)|^p dt )^{1/p}   (3)

where F_µ^{−1} and F_ν^{−1} are the inverse cumulative distribution functions of µ and ν.
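For equal-sized empirical measures, sorting realizes the inverse CDFs in Equation 3 on a uniform grid, as in this short sketch (our illustration):

import numpy as np

def wasserstein_1d(x, y, p=2):
    # x, y: 1-D arrays of equal length, treated as uniform empirical measures
    x, y = np.sort(x), np.sort(y)
    return (np.mean(np.abs(x - y) ** p)) ** (1.0 / p)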
Generally, we are more interested in dimensions greater than one, so we cannot use this closed-form solution directly to solve the OT problem efficiently. Instead, the Wasserstein distance between two measures on R^d can be approximated by aggregating the 1-D Wasserstein distances between their projections over multiple directions on a unit sphere, which is called the Sliced Wasserstein distance (Peyré & Cuturi, 2019):

SW_p(µ, ν) = ( ∫_{S^{d−1}} W_p(P_{θ,#}µ, P_{θ,#}ν)^p dθ )^{1/p}   (4)
where, Sd−1 = {θ ∈ Rd : ∥θ∥ = 1} is the d-dimensional unit sphere and Pθ : Rd → R is the projection. Since the projections are now 1-D measures, we can use the closed-form solution given by Equation 3. When m = n, the Sliced Wasserstein distance can be easily computed by simply sorting points in 1-D measures and can be given by:
SW_p(µ̂, ν̂) = ( (1/D) Σ_{k=1}^{D} Σ_{i=1}^{m} |x_{α_{θk}(i)} − y_{β_{θk}(i)}|^p )^{1/p}   (5)
where α_{θk} and β_{θk} are the permutations that order the support points projected onto direction θ_k increasingly, and D is the total number of directions.
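A minimal PyTorch sketch of Equation 5 for two uniform discrete measures with equal numbers of support points (our illustration, with random directions drawn by normalizing Gaussian samples):

import torch

def sliced_wasserstein(x, y, n_slices=300, p=2):
    # x, y: (m, d) tensors of support points
    d = x.shape[1]
    theta = torch.randn(d, n_slices)
    theta = theta / theta.norm(dim=0, keepdim=True)   # directions on S^{d-1}
    x_proj = (x @ theta).sort(dim=0).values           # project, then sort
    y_proj = (y @ theta).sort(dim=0).values
    return ((x_proj - y_proj).abs().pow(p).sum(dim=0).mean()).pow(1.0 / p)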
2.2 CONTRASTIVE LEARNING
Contrastive learning aims to learn an embedding space that encourages augmentations of the same input sample to have similar representations, and different samples to have dissimilar ones. Chopra et al. (2005) is an early example of contrastive learning in a supervised setup, taking pairs of samples as input to the network.
On the other hand, the contrastive loss introduced by Chen et al. (2020) is known as SimCLR. It follows batch-wise training and operates in a self-supervised setting, where the distance between a sample and its augmentations is reduced. Later, Khosla et al. (2020) proposed an extension of SimCLR to the supervised setup, which additionally reduces the distance between a sample and other samples of the same class.
3 OUR METHOD
In this section, we discuss our method for computing Wasserstein embeddings for point clouds in a contrastive learning setup, as shown in Figure 2. We build an in-batch contrastive learning setup which can be either fully supervised or self-supervised and can be used as a pre-training methodology for any downstream task. The goal is to place samples from the same class closer together than samples from different classes in the embedding space (larger inter-cluster and smaller intra-cluster distances). Here, the choice of embedding space plays a key role, as different metric spaces embed data differently and represent different types of semantic structure.
3.1 CONTRASTIVE LEARNING WITH OPTIMAL TRANSPORT
Let O = {(P_m, l_m)}; m = 1 . . . M be a collection of point clouds P_m = {p_i}; i = 1 . . . N_m, where p_i ∈ R³, with corresponding class labels l_m ∈ L, where L = {1, . . . , C} is the set of class labels. Each point cloud P_m contains N_m points defined in the 3D x, y, z space. To define the batch-wise contrastive loss, we first randomly draw K samples from the collection O to form a batch B = {(P_m, l_m)_k}; k = 1 . . . K. For every point cloud P_m ∈ B, we apply a fixed set of random transformations T₁ and T₂ to get two instances of P_m (as shown in Figure 2), giving an augmented batch B′ = {(P′_m, l_m)_{k′}}; k′ = 1 . . . 2K. The augmented batch is twice the size of the original batch. The point clouds P′_m indexed at k′ and k′ + 1 are augmented versions of the point cloud P_m indexed at k; since they are augmented versions of P_m[k], their class labels satisfy l_m[k′] = l_m[k′ + 1] = l_m[k].
The input to the encoder is the augmented batch B′, from which every P′_m must be mapped to the embedding space according to its geometric features and appearance, with samples of the same class being closer. The encoder implements a function f : R^{N_m×3} → W(X) that maps a point cloud P′_m to the Wasserstein space W(X), with W_p the distance metric on W(X) and X the ground metric space. We choose R², R⁴ and R⁸ as our ground metric spaces, in which the embedding z′_m of P′_m is represented as the discrete distribution Σ_{i=1}^{S} (1/S)·δ(x_i) supported on points x_i ∈ X, with a total of S support points, each with uniform probability mass 1/S. In our implementation, we reshape the embedding z′_m of P′_m to obtain the discrete distribution for the different ground metric spaces.
Generally, computing the exact solution of W_p is costly. To make the optimal transport computation more tractable, we replace the distance metric W_p on the Wasserstein space W(X) by the Sliced Wasserstein distance SW_p, a low-cost approximation of the Wasserstein distance with computational complexity O(S log S). For all our experiments, we set p = 2 and the number of slices D = 300.
Supervised Contrastive Loss. In the supervised setting, for any P′_m ∈ B′ indexed at k′ with corresponding label l_m[k′], the positive set is defined as A = {P′_j ∈ B′ : l_j = l_m[k′]}. We define our supervised contrastive loss for learning point cloud Wasserstein embeddings as:

L_sup = − Σ_{i=1}^{2K} log ( Σ_{j∈A, j≠i} exp(−SW₂²(z_i, z_j)) / Σ_{t≠i} exp(−SW₂²(z_i, z_t)) )   (6)

The loss minimizes the Sliced Wasserstein distance between the embeddings, represented as discrete distributions, of an anchor and all samples of the same class in the augmented batch. It can easily be converted into a self-supervised version with the necessary modifications.
Self-Supervised Contrastive Loss. Contrary to the supervised setting, in the self-supervised setting the class labels of the point clouds cannot be used in any way to train the encoder. Here, the positive set of any P′_m ∈ B′ contains only the other augmentation of P′_m. If i ∈ {1 . . . 2K} is the index of some P′_m ∈ B′, let j(i) be the index of its other augmented sample. We define our self-supervised loss for learning point cloud Wasserstein embeddings as:
L_self = − Σ_{i=1}^{2K} log ( exp(−SW₂²(z_i, z_{j(i)})) / Σ_{t≠i} exp(−SW₂²(z_i, z_t)) )   (7)
Here, only the Sliced Wasserstein distance between the embeddings of an anchor and its augmented sample is minimized. Apart from the augmented sample, samples of the same class in the augmented batch are treated as negatives, which might hinder the overall optimization depending on the batch size.
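For clarity, the self-supervised loss of Equation 7 can be sketched as follows (our illustration; sw2 is assumed to return the squared Sliced Wasserstein distance between two embeddings, and the batch is ordered so that indices 2t and 2t+1 are the two views of sample t):

import torch

def self_supervised_loss(z, sw2):
    n, loss = len(z), 0.0
    for i in range(n):
        j = i + 1 if i % 2 == 0 else i - 1   # index j(i) of the paired view
        num = torch.exp(-sw2(z[i], z[j]))
        den = sum(torch.exp(-sw2(z[i], z[t])) for t in range(n) if t != i)
        loss = loss - torch.log(num / den)   # Eq. 7
    return loss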
4 EXPERIMENTS
A representation that captures good geometric information in a smooth latent space is generally better in various shape understanding and synthesis tasks. To demonstrate the representational power of the learned Wasserstein embeddings compared to Euclidean embeddings, in this section we present qualitative and quantitative evaluations on multiple tasks: supervised and self-supervised point cloud classification, transfer learning, point cloud segmentation and point cloud interpolation.
Datasets We use ModelNet10 (MN10) and ModelNet40 (MN40) (Wu et al., 2015) to perform experiments on classification. MN40 consists of 12311 CAD models with a total of 40 categories, where 9843 objects are used for training and 2468 for testing. We use the data provided by Qi et al. (2017b), from which we randomly sample 2048 points for each point cloud. MN10 is a subset of MN40 dataset for 10 categories. To evaluate how the learned embeddings perform on real-world data, we also conduct experiments on ScanObjectNN (Uy et al., 2019). It contains object scans with partial occlusions and background making it a challenging dataset. It has 2304 objects for training and 567 for testing from 15 categories. For part segmentation, we use ShapeNetPart (SN) (Yi et al., 2016) that consists of 16681 point clouds from 16 categories and 50 part categories in total.
Pre-training We use a 3-layer MLP followed by a max-pooling layer as our encoder for the classification and segmentation tasks. For interpolation, we use the encoder and decoder proposed by FoldingNet (Yang et al., 2018). To perform any downstream task on a particular dataset, the encoder is first pre-trained on the dataset using the contrastive loss explained in Section 3.1 with different distance metrics, followed by testing and evaluation on the desired task. Throughout the experiments, we refer to the encoder trained with our method as CL+SW2, followed by the ground metric space in parentheses. For the transformations in the contrastive loss that form the augmented instances, we sequentially compose random scaling, rotation and point jittering. For the Euclidean distance metrics, the encoder function f : R^{N_m×3} → R^d maps a point cloud to a d-dimensional space whose elements are interpreted as vectors, with l2-distance or cosine similarity as the distance measure. Since cosine yields a similarity score between two vectors based on their angle, the negative signs inside the exponentials in Eqs. 6 and 7 are discarded in that case. Note that when training the encoder with cosine similarity, the embeddings are normalized.
Baselines We consider L2-distance and Cosine similarity as the distance measures for computing Euclidean embeddings: we train the encoder using our loss (Eqs. 6, 7) with SW₂²(·, ·) replaced by these measures. We also consider recent methods for point clouds using the Wasserstein metric, i.e., WPCE (Kawano et al., 2020) and SSW-AE (Nguyen et al., 2021), as baselines. WPCE embeds Wasserstein space into Euclidean space using a Siamese network built on a PointNet (Qi et al., 2017a) encoder-decoder architecture; the network is trained so that the Euclidean distance mimics the Wasserstein distance between two point clouds. SSW-AE proposed to use the SW distance and its variants (max SW and adaptive SW) as reconstruction losses to learn point cloud embeddings, supervising a PointNet-based auto-encoder with different metrics.
4.1 3D OBJECT CLASSIFICATION
We extract point cloud embeddings from a pre-trained encoder and use a simple linear SVM as our classifier. Specifically, we fit a linear SVM classifier on the embeddings produced by an encoder on the train split and report the overall classification accuracy on the test split. In Figure 1, we can see that the features captured by Wasserstein embeddings summarize the overall object geometry better than the embeddings learned in Euclidean space. This property is also reflected in the classification performance shown in Table 1: in both supervised and self-supervised settings, the classification accuracy of the embeddings extracted by the encoder trained with CL+SW2 is higher than that of CL+L2 and CL+Cosine. Thus, compared to Euclidean space, the performance of SW2 is consistently better on all datasets, which implies that embeddings learnt in Wasserstein space can increase classification accuracy.
We also show that our method is more effective than WPCE and SSW-AE. This improvement can be explained by the difference in how the Wasserstein embeddings are extracted: our methodology uses the OT metric to operate directly in the embedding space, endowed by contrastive learning. This helps learn better representations by exploiting the similarities between distributions while utilizing the flexibility of the target Wasserstein space.
4.2 TRANSFER LEARNING
We examine the ability of the embeddings acquired by encoders trained with different distance metrics to generalize to unseen classes, by performing transfer learning for point cloud classification. We follow the same process as explained in Section 4.1 for reporting the overall classification accuracy. The quantitative comparisons for transfer learning are shown in Table 2. We perform the evaluation in two transfer learning settings, MN10 to MN40 and SN to MN40: the encoder is pretrained on MN10 or SN, followed by evaluation on MN40. In both settings, the model generalizes to new, unseen classes by wielding the knowledge of geometry learned during training. We can see that CL+SW2 consistently performs better than the other distance measures and methods in both transfer learning settings, with and without supervision. The results imply that Wasserstein embeddings are better at transferring the knowledge of capturing geometry needed for good classification performance.
4.3 3D OBJECT PART SEGMENTATION
We train a 3-layer MLP network to predict a class label for every point in a point cloud, where the input to this network is the embedding provided by a pre-trained encoder. Part segmentation, in particular, requires a fine-grained understanding of the local geometry of the objects. Along with the global embedding of the point cloud, the per-point embeddings acquired before max-pooling are stacked together and passed to the segmentation network. Note that only the segmentation network's weights are optimized, using the standard cross-entropy loss, while the encoder's weights are frozen. We evaluate performance using the mIoU metric: for the mIoU of each class, the IoUs of all parts from that class are averaged, and the instance-average mIoU is the mean of the IoUs over all instances. The comparisons of average instance mIoU and per-class average mIoU for the supervised and self-supervised settings are shown in Table 3 and Table 4, respectively. We can see that the results outperform the other distance measures and methods, implying that Wasserstein embeddings better capture the fine-grained local information required for the task.
4.4 3D SHAPE INTERPOLATION
We further examine the quality of our learnt space by performing shape interpolation between inter- and intra-class point cloud instances. The main aim of this task is to examine which learnt space captures the geometric information needed to generate consistent interpolations of 3D point clouds based on their structure. As interpolation is a synthesis task, we need a decoder network to reconstruct the object from its embedding. For this, we train an encoder-decoder network with our contrastive loss (Eq. 6) on the embeddings for the encoder, along with a reconstruction loss for the decoder. We use the encoder and decoder proposed by FoldingNet, which learns to deform a unit sphere to take the shape of a 3D object's surface. We found it difficult to optimize the network for good classification performance and detailed reconstruction at the same time: since our contrastive loss pulls point clouds with similar global representations closer, it becomes difficult to accurately reconstruct the input point cloud without fine-grained characteristic information. A simple way to deal with this issue is to assign weights to the individual loss terms, with the weights summing to 1. To train the encoder-decoder, the total effective loss is a weighted sum of our contrastive loss and a reconstruction loss, with weights 0.2 and 0.8, respectively; we use Chamfer distance as the reconstruction loss. Interpolation results are shown in Figure 3. We can see that interpolations done with Wasserstein embeddings follow a smooth path with relatively few noisy points. For example, in Figure 3 (b), for Euclidean embeddings the source chair suddenly transforms into the shape of the target chair, whereas with Wasserstein embeddings the legs of the chair smoothly morph into the base of the target chair.
4.5 EXPLAINABILITY
We investigate what makes Wasserstein embeddings perform better as shown in the downstream tasks. We visualize and compare the features captured by Wasserstein embeddings and Euclidean
embeddings in Figure 1. These features are called critical points, as shown by Qi et al. (2017a). The embedding of a point cloud is completely determined by this subset of points: the embedding remains the same as long as the set of critical points is unchanged. For a given point cloud, the critical points are the 3D points that contribute to the global embedding after the max-pooling layer, which implies that the number of critical points cannot exceed the embedding size. The selection of critical points is extremely important, as they alone decide the embedding of a point cloud; for good-quality embeddings, the critical points should best describe the given point cloud. In Figure 1, we can see that the network intelligently summarizes the point cloud by choosing boundary points as the critical points. Our Wasserstein embeddings capture the full skeletal structure of the given point cloud, whereas the critical points captured by Euclidean embeddings are comparatively poor, with uneven distribution and missing parts. Thus, Wasserstein spaces are indeed better at preserving and capturing geometric structure amenable to the optimization task.
4.6 ABLATION STUDY
We perform point perturbation and point density variation tests on the encoders pretrained with the different distance metrics and report classification accuracy on ModelNet40, as shown in Figure 4. For the point perturbation test, we add Gaussian noise to input point clouds, with the standard deviation of the noise varying from 0.01 to 0.1. We observe that for all noise levels, even with severe distortion, CL+SW2 performs better than CL+L2, implying that the discrete representation learnt in Wasserstein space is less prone to performance degradation under input noise. Further, for the varying-density test, we randomly sample 8192, 4096, 2048, 1024, 512, 256 and 128 points from the input point clouds and evaluate on them. CL+SW2 again consistently does better than CL+L2, showing that Wasserstein embeddings are robust to missing points in the input point cloud.
5 CONCLUSION
In this paper, we proposed to represent point clouds as discrete probability distributions in the Wasserstein space. We built a contrastive learning method to learn Wasserstein embeddings for 3D point clouds. Our proposed method can be used as a pretrained model for any downstream network in supervised and self-supervised settings. Empirically, we found that representations learnt using our pre-training of contrastive learning with Sliced Wasserstein distance captured the structure and underlying geometry better than standard Euclidean embeddings. With improved embeddings, our method outperformed all the existing methods including our baseline with L2 norm and Cosine similarity for all the downstream tasks (classification, segmentation, transfer learning, interpolation) in both supervised and self-supervised settings. We also show an interesting study of our self-explainable method by capturing critical points of point clouds better than embeddings in Euclidean space. For future work, a possible direction is to explore other related problems such as domain adaptation for point clouds using optimal transport. Another interesting aspect is to consider complex datasets including multiple objects and scenes of point clouds.
Reproducibility Statement: Our proposed method is easily reproducible considering pairs of point clouds as input. The network architecture explained in Figure 2 consists of simple MLP layers followed by max pooling and reshaping of embedding. We mention contrastive loss functions for supervised and self-supervised settings in Section 3.1. The pre-training setup is detailed in Experiments Section 4. Datasets used in this paper are well-known in point cloud domain. We provided references for all the datasets in Experiments Section. Our code will be made publicly available after the acceptance of the work. | 1. What is the main contribution of the paper in 3D point cloud embedding?
2. What are the strengths and weaknesses of the proposed approach, particularly in its organization, experimentation, and comparison to other works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or limitations regarding the method that the author should discuss more?
5. Can the author provide more specific comparisons with other relevant works in 3D point cloud embedding? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a method for 3D point cloud embedding where point clouds are represented as discrete probability distributions in the Wasserstein space. The point clouds are firstly augmented into two instances by random transformations in the batch augmentation. The augmented batch is further mapped to the Wasserstein space through shared MLP. Sliced Wasserstein distance is computed on the augmented batch for contrastive learning. Both supervised and self-supervised contrastive losses based on the SW are proposed. The goal is to represent the samples from the same class closer than the samples from different classes in the Wasserstein embedding space in the pre-training. Experimental results on four down-stream datasets and ablation studies demonstrate the effectiveness of the pretrained embeddings.
Strengths And Weaknesses
Strengths:
1. The paper is well organized, and the experiments and analyses are extensive.
2. The idea of embedding 3D point clouds into the Wasserstein space by contrastive learning is interesting and sound.
Weaknesses:
1. The baselines are relatively weak. Quite a few papers investigate 3D point cloud embedding by self-supervised/unsupervised/contrastive pre-training (e.g. Point-BERT (CVPR 2022), POS-BERT (arXiv 2022), Point-M2AE (NeurIPS 2022), PointGLR (TPAMI 2022)), yet this paper neither cites nor compares with them.
2. The improvements over the weak baselines are small, and the proposed method performs worse than the other pre-training methods mentioned above.
3. The limitations are not fully discussed.
Clarity, Quality, Novelty And Reproducibility
The paper is well organized. The proposed method seems novel and technically sound. |
| 1. What is the main contribution of the paper regarding learned embeddings for point clouds?
2. What are the strengths and weaknesses of the proposed approach, particularly in its use of Wasserstein distances and sliced approximation?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What questions or concerns does the reviewer have regarding the paper's methodology, such as the choice of projections, number of samples, space dimension, architecture, and embedding usage?
| Summary Of The Paper
The paper presents a new learned embedding for point clouds, which can be trained in a supervised or unsupervised manner. A 3D point cloud of size N is mapped to a discrete, uniform distribution represented by S (e.g. 128) points in some Euclidean space (e.g. R^2, R^4 or R^8). The embedding is trained in a contrastive framework (SimCLR style) by pulling together distributions that represent point clouds of the same class (or augmentation). The novel part of this representation is that Wasserstein distances in the target space are computed during training (with respect to a pairwise L_p-norm cost matrix). Another twist is that instead of computing this distance directly, the authors suggest using the sliced Wasserstein approximation (via random projections onto 1-D distributions), which can be computed rather efficiently, in O(|S| log |S|) time. The embedding is pre-trained and shown to be attractive as an input to downstream tasks. This is demonstrated in comparison both to the same method with other distance measures (L2 or cosine rather than Wasserstein) and to other baseline embeddings of point clouds.
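To make the summarized pipeline concrete, here is a minimal PyTorch sketch of the reshape-to-distribution step, the sort-based sliced distance and the self-supervised loss (Eq. 7 of the paper); the shapes, the O(B^2) pairwise loop and the helper names are illustrative simplifications, not the authors' code:

```python
import torch
import torch.nn.functional as F

def to_distribution(z, ground_dim=4):
    # Reshape a flat embedding (B, d) into S = d / ground_dim support
    # points in R^ground_dim, each carrying uniform mass 1/S.
    B, d = z.shape
    return z.view(B, d // ground_dim, ground_dim)

def sw2_squared(x, y, n_proj=300):
    # Differentiable sliced 2-Wasserstein^2 between paired point sets of
    # shape (B, S, g): project onto random directions, sort each 1-D
    # measure, and average the squared differences.
    theta = F.normalize(torch.randn(n_proj, x.shape[-1], device=x.device), dim=1)
    px = torch.sort(x @ theta.T, dim=1).values
    py = torch.sort(y @ theta.T, dim=1).values
    return ((px - py) ** 2).mean(dim=(1, 2))  # one value per pair

def self_supervised_loss(z1, z2):
    # z1, z2: the two augmented views, shape (B, S, g). Positives are the
    # paired views; every other sample in the batch acts as a negative.
    B = z1.shape[0]
    zs = torch.cat([z1, z2], dim=0)  # (2B, S, g)
    dists = torch.stack([sw2_squared(zs[i].expand_as(zs), zs)
                         for i in range(2 * B)])  # (2B, 2B) via O(B^2) loop
    eye = torch.eye(2 * B, dtype=torch.bool, device=zs.device)
    logits = (-dists).masked_fill(eye, float("-inf"))  # exclude self-pairs
    pos = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)]).to(zs.device)
    return F.cross_entropy(logits, pos)
```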
Strengths And Weaknesses
Strengths:
The idea of embedding point clouds into a Wasserstein space (in which points are (discrete) distributions and distances are Wasserstein distances with respect to point-wise L_p norms) is very attractive in my opinion, and as far as I know has not been done for point clouds.
The embedding is optimized in a pre-training stage, i.e. irrespective of any downstream task. This has the advantage of yielding a general-purpose embedding, which can be injected into the training pipeline of relevant point cloud tasks, and the ability to generalize across tasks or datasets (as shown for transfer learning in classification).
The idea of using the sliced approximation of Wasserstein distances seems to work well in practice in this setting, which is an interesting finding.
The variety of tasks on which the embedding is evaluated shows its merits as a general-purpose embedding.
The writing is very clear and the paper is very enjoyable to read.
Weaknesses: While I like the direction taken and can appreciate the empirical results that support it, there are several issues:
(1) Why use the sliced approximation of Wasserstein distances? There are many questions around this decision that should have been answered:
How accurate is the approximation? There is no comparison to the full distance computation, or to any alternative. There is also no discussion of the dependence on the number of projections, which is simply fixed to 400.
How efficient is the computation? There is no discussion or empirical support for the efficiency. It is claimed to be O(|S| log |S|), presumably because each projection can be computed at the cost of sorting the elements along the projected line. In practice, however, it should be O(|P||S| log |S|), where |P| = 400 is the number of projections, keeping in mind that |S| = 128. This raises the question: why not use the more common iterative methods (e.g. Sinkhorn), which are known for very fast O(|S|^2) computation with very low constants? (See the sketch after this list.)
How is the sliced computation implemented? It obviously should be done in a differentiable manner (to enable backpropagation).
(2) Why consider specifically the pre-training scheme, rather than end-to-end training for a specific task?
Since the embedding is (as I understand) fully differentiable, it could be trained together with the downstream components, most likely leading to better results.
(3) Ablations: There is no discussion or empirical testing/conclusions regarding:
The number of projections: 400. How was this chosen? Does it not depend on the space dimension, or the number of points |S|?
Number of samples (e.g. 128)
The space dimension. Testing was done on R^2, R^4 and R^8, but no clear implications are observed or discussed.
Architecture
(4) It is not explicitly stated how the embedded point clouds are used by subsequent components. As I understand it, the resulting embedding is flattened to a 1-dimensional vector, and this is probably also true for the compared L2 or cosine distance supervision. It is slightly awkward to ignore the structure of the embedding as a multidimensional sample.
(5) There is no explanation (or validation) of why a uniform distribution over the support points (equal weights) is used. Perhaps learning the weights could be helpful.
(6) The interpolation experiment is not very convincing. Visually, I find it hard to understand the differences. I would suggest accompanying the report with some quantitative results (perhaps a measure of the smoothness of the interpolation, obtained by measuring (Chamfer?) distances between consecutive samples).
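To ground the complexity comparison raised in point (1), the following sketch contrasts the sort-based sliced estimate with an entropic (Sinkhorn) solver for two uniform discrete measures; the 400 projections and |S| = 128 match the figures quoted above, while the regularization strength and iteration count are illustrative choices, not values from the paper:

```python
import numpy as np

def sliced_w2(x, y, n_proj=400, seed=0):
    # x, y: (S, d) support points of two uniform discrete measures.
    # Each of the n_proj 1-D problems is solved by sorting, so the
    # overall cost is O(n_proj * S log S).
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_proj, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    px = np.sort(x @ theta.T, axis=0)
    py = np.sort(y @ theta.T, axis=0)
    return np.sqrt(np.mean((px - py) ** 2))

def sinkhorn_w2(x, y, eps=0.1, iters=200):
    # Entropic OT between uniform measures: O(iters * S^2) per distance.
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    S = len(x)
    a = b = np.full(S, 1.0 / S)
    K = np.exp(-C / eps)
    u = np.ones(S)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # approximate transport plan
    return np.sqrt(np.sum(P * C))     # approximate W2

x = np.random.default_rng(1).normal(size=(128, 4))
y = np.random.default_rng(2).normal(size=(128, 4))
print(sliced_w2(x, y), sinkhorn_w2(x, y))
```

Note that the sort-based version also remains differentiable in practice (sorting only permutes its inputs, so gradients pass through the sorted values), which bears on the implementation question above.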
Clarity, Quality, Novelty And Reproducibility
Clarity: The writing is very clear. Quality: Limited; many points need to be further investigated and experimented on. Novelty: Sufficient. Reproducibility: Problematic, in particular the implementation of the sliced Wasserstein distance.
ICLR | Title
Learning 3D Point Cloud Embeddings using Optimal Transport
Abstract
Learning embeddings of any data largely depends on the ability of the target space to capture semantic relations. The widely used Euclidean space, where embeddings are represented as point vectors, is known to be lacking in its potential to exploit complex structures and relations. Contrary to standard Euclidean embeddings, in this work, we embed point clouds as discrete probability distributions in Wasserstein space. We build a contrastive learning setup to learn Wasserstein embeddings that can be used as a pre-training method with or without supervision for any downstream task. We show that the features captured by Wasserstein embeddings are better in preserving the point cloud geometry, including both global and local information, thus resulting in improved quality embeddings. We perform exhaustive experiments and demonstrate the effectiveness of our method for point cloud classification, transfer learning, segmentation and interpolation tasks over multiple datasets including synthetic and real-world objects in both supervised and self-supervised settings. We also compare against other existing methods and show that our method outperforms them in all downstream tasks. Additionally, our study reveals a promising interpretation of capturing critical points of point clouds that makes our proposed method self-explainable.
1 INTRODUCTION
Recent years have seen major advancements in 3D point cloud representation learning. It has gained prominence in a wide spectrum of areas such as robotics (Maturana & Scherer, 2015), computer vision (Su et al., 2015), animation (Pan et al., 2020) with a broad range of applications including shape synthesis and modeling (Yi et al., 2016), autonomous driving (Mahjourian et al., 2018), indoor navigation (Zhu et al., 2017). Metric learning for good quality point cloud embeddings is a crucial problem given unique set of challenges associated with 3D data, from processing point clouds in various forms to learning in different spaces. Processing and developing learning methods for point clouds is one of the major challenges due to their irregular, unstructured and unordered nature.
Earlier methods process point clouds by converting them into regular structures like, volumetric representations (Maturana & Scherer, 2015), (Wu et al., 2015) or 2D image projections (Qi et al., 2016), (Su et al., 2015) to employ well explored powerful convolutional techniques. However, these transformations either incur loss of information or require high memory and computational complexity. Later, methods have been developed to learn representations by directly using raw point clouds (Qi et al., 2017a), (Qi et al., 2017b), (Wang et al., 2019). These methods either process each point individually or try to infer features from local regions in a point cloud. The state-of-the-art methods in this category are largely classification, generation or reconstruction-based supervised, unsupervised or self-supervised methods.
The common choice of recent 3D point cloud representation learning methods is to operate and represent point clouds as point vectors in Euclidean spaces, where relation between data points is depicted by either angle or distance. We all know that the embedding space largely determines the quality of embeddings, as it depends on how well the target space can capture the structure of data. Euclidean space is confined in its potential to capture complex structure and possible semantic relations. Realizing these drawbacks, many works use hyperbolic space (Nickel & Kiela, 2018), (Nickel & Kiela, 2017) to capture this uncertainty and asymmetric relationship for word and graph embeddings.
As Euclidean space is constrained in its ability to represent data structures, we need to go beyond Euclidean space to get more expressive embeddings for point clouds. Recent studies show that many spaces can be embedded into Wasserstein space with low distortion (Frogner et al., 2019), this reflects how large Wasserstein spaces are. Recently, Courty et al. (2018) tries to mimic Wasserstein distance in Euclidean space for image embeddings to build efficient methods along with availing the flexibility of Wasserstein space. Also, there are some latest methods for point cloud embeddings using Optimal Transport (OT) based distances. Kawano et al. (2020), motivated by Courty et al. (2018), proposes a method to approximate Wasserstein distance by Euclidean norm between two point cloud embeddings. Since Euclidean space is known for its limited ability, finding isometric low-distortion point cloud embeddings is tough. Another work by Nguyen et al. (2021) presents how Optimal Transport based distances for point cloud reconstruction affect the quality of learnt embeddings. However, this method utilizes OT based distances only for reconstruction loss, which is not enough to learn complex shapes and fails to capture fine details of point clouds.
Motivated by aforementioned limitations and inspired by Frogner et al. (2019), in this paper, we advocate for mapping point cloud as a discrete distribution in Wasserstein space. We build a contrastive learning setup to learn point cloud embeddings. Leveraging the idea of contrasting point clouds against each other, we intend to learn common and distinctive features between same and different distributions, respectively. It can be applied to both supervised and self-supervised settings. For this, Sliced Wasserstein (SW) distance is considered which is a low-cost approximation of Wasserstein distance due to its high computational complexity. Along with comparisons with commonly used distance measures such as L2 norm and Cosine similarity, we also compare our method against recent works on point clouds using OT. We show that the learnt features capture the point cloud structure better than Euclidean embeddings and consistently performs better in multiple 3D analysis and synthesis tasks. We argue that our approach of incorporating OT metric in a contrastive learning setup captures the underlying geometry and global shape pertaining to critical points (as shown in Figure 1) and fine details of a point cloud.
Our contributions: i) To the best of our knowledge, we are the first to propose the use of OT metric which exploits the geometry of the data along with contrastive learning for point clouds. Unlike Euclidean embeddings, we represent a point cloud as a discrete probability distribution in the embedding space. ii) Using this representation, we develop a method to learn Wasserstein embeddings for 3D point clouds endowed by contrastive learning setup. We introduce a novel neural network architecture which takes pairs of point clouds as input. It uses supervised/ self-supervised contrastive loss depending on the availability of labels, to minimize the Wasserstein distance between similar point clouds. A major advantage of our network is it can be used as a pretrained model for any downstream network. iii) We perform exhaustive experiments over a wide variety of tasks (supervised and self-supervised learning for classification, transfer learning, segmentation, and interpolation) for four popular point cloud datasets. We show that our Wasserstein embeddings are better in capturing the inherent geometry of point clouds. Additionally, we study the point cloud embeddings in most commonly used Euclidean space for our proposed architecture by replacing the OT metric with L2 norm (our baseline). We also compare our approach (CL+SW2) against the other existing methods and show that our method outperforms in all the downstream tasks. iv) We further explore the self-explaining aspect of our model and illustrate the 3D Wasserstein features computed by the encoder (as shown in Figure 1). We show Wasserstein embeddings are better in capturing critical points and semantic structure amenable to the optimization task.
2 PRELIMINARIES
In this section, we briefly present the optimal transport metric, variants of Wasserstein distance, and contrastive learning setup which are used in our proposed method.
2.1 OPTIMAL TRANSPORT AND WASSERSTEIN DISTANCE
Optimal transport aims to solve for the most efficient way to transport mass between two probability distributions. Formally, given two probability distributions µ and ν on a metric space X , for p ≥ 1, the p-Wasserstein distance is given by
Wp(µ, ν) =
( inf
π∈Π(µ,ν) ∫ X×X c(x, y)pdπ(x, y) )1/p (1)
where, π is a transport plan that defines a flow between mass from µ to locations in ν, Π(µ, ν) is the joint probability distribution with the marginals µ and ν and c(x, y) is the ground metric which assigns a cost of moving a unit of mass x ∈ X from µ to some location y ∈ X in ν. The cost of moving the mass in µ to match in ν according to the optimal transport plan π∗, is called the Wasserstein distance between the two distributions (Villani, 2003).
The above equation can also be written for discrete distributions, say µ̂ = ∑m
i=1 aiδ(xi) and ν̂ =∑n j=1 bjδ(yj) are two discrete distributions, where, {ai}; i = 1 . . .m and {bj}; j = 1 . . . n are the probability mass that should sum to 1, δ is the Dirac delta function and {xi}; i = 1 . . .m and {yj}; j = 1 . . . n are the support points in Rd with m and n being the number of points in each measure. Then, the discrete version of Equation 1 is
Wp(µ̂, ν̂) =
( min
P∈U(a,b) ⟨Cp, P ⟩
)1/p (2)
where, ⟨·, ·⟩ denotes the Frobenius dot-product, C ∈ Rm×n+ is the pairwise ground metric distance, P is the coupling matrix and U is the set of all possible valid coupling matrices, i.e. U(a, b) = {P ∈ Rm×n : P1n = a, P⊤1m = b}. Interestingly, there exists a closed-form solution for Wasserstein distance only when the distributions are one-dimensional measures with Lp norm as the cost function. The closed-form for Wasserstein distance in 1-D is (Peyré & Cuturi, 2019)
Wp(µ, ν) = (∫ 1 0 |F−1µ (t)− F−1ν (t)|pdt )1/p
(3)
where, F−1µ and F −1 ν are the inverse cumulative distribution functions of µ and ν.
Generally, we are more interested in dimensions greater than one. Thus, we cannot use this closedform solution directly to solve the OT problem efficiently. Instead, the Wasserstein distance between two measures on Rd can be approximated by aggregating the 1-D Wasserstein distance between their projections over multiple directions on a unit sphere, which is called the Sliced Wasserstein distance (Peyré & Cuturi, 2019):
SWp(µ, ν) = (∫ Sd−1 Wp(Pθ,#µ, Pθ,#ν) pdθ )1/p (4)
where, Sd−1 = {θ ∈ Rd : ∥θ∥ = 1} is the d-dimensional unit sphere and Pθ : Rd → R is the projection. Since the projections are now 1-D measures, we can use the closed-form solution given by Equation 3. When m = n, the Sliced Wasserstein distance can be easily computed by simply sorting points in 1-D measures and can be given by:
SWp(µ̂, ν̂) =
( 1
D D∑ k=1 m∑ i=1 |xαθk (i) − yβθk (i)| p
)1/p (5)
where, αθk and βθk are the permutation ordering in the increasing order of the support points projected to the direction θk with D being the total number of directions.
2.2 CONTRASTIVE LEARNING
Contrastive learning aims to learn an embedding space that encourages augmentations of the same input sample to have similar representations and of different samples to be dissimilar. Chopra et al. (2005) is an early example of using contrastive learning in a supervised learning setup which takes pair of samples as input to the network.
On the other hand, the contrastive loss introduced by Chen et al. (2020) is named as SimCLR. It follows batch-wise training and is operated in self-supervised setting. For this setup, the distance is reduced between the sample and its augmentations. Later, Khosla et al. (2020) proposed the extension of SimCLR for supervised setup. It additionally aims at reducing the distance between a sample and other samples from same class in a supervised setting.
3 OUR METHOD
In this section, we discuss our method of computing Wasserstein embeddings for point clouds in a contrastive learning setup as shown in Figure 2. We build an in-batch contrastive learning setup which can either be fully supervised or self-supervised and can be used as a pre-training methodology for any downstream task. The goal is to represent samples from same class closer than the samples from different classes in the embeddings space (larger inter-cluster and smaller intra-cluster distance). Here, the choice of embedding space plays a key role for desirable performance, as individual metric spaces can embed data differently and represent different types of semantic structure.
3.1 CONTRASTIVE LEARNING WITH OPTIMAL TRANSPORT
Let O = {(Pm, lm)}; m = 1 . . .M be a collection of point clouds Pm = {pi}; i = 1 . . . Nm , where, pi ∈ R3 with their corresponding class labels lm ∈ L, where L = {1, . . . C} is a set of class labels. Each point cloud Pm contains Nm number of points defined by 3D space points in x, y and z direction. For defining the batch-wise contrastive loss, we first randomly draw K samples from the collection O, that form a batch B = {(Pm, lm)k}; k = 1 . . .K. For every point cloud Pm ∈ B, we apply fixed set of random transformations T1 and T2 to get two instances of Pm (as shown in Figure 2), giving an augmented batch B′ = {(P ′m, lm)k′}; k′ = 1 . . . 2K. The augmented batch is twice the size of the original batch. The point clouds P ′m indexed at k
′ and k′ + 1 are augmented version of the point cloud Pm indexed at k. As these are augmented versions of Pm[k], their class labels are lm[k′] = lm[k′+1] = lm[k].
The input to the encoder is an augmented batch B′, from which all P ′m needs to be mapped to the embeddings space depending on its geometric features and appearance, with samples having same class label being closer. The encoder represents function f : RNm×3 → W(X ), that maps a point cloud P ′m to the Wasserstein space W(X ), with Wp being the distance metric on W(X ) and X being the ground metric space. We choose R2, R4 and R8 to be our ground metric spaces, in which the corresponding embedding z′m of P ′ m is represented as discrete distribution { 1S · xi}; i = 1 . . . S supported by xi ∈ X with a total of S support points, all with uniform probability mass 1S . In our implementation, we reshape the embedding z′m of P ′ m to obtain the discrete distribution for different ground metric spaces.
Generally, the computation for exact solution of Wp is costly. To make the computation of optimal transport more tractable, we replace the distance metric Wp on Wasserstein space W(X ) by the Sliced Wasserstein distance metric SW p. SW p is a low-cost approximation of Wasserstein distance with computational complexity being O(S logS). For all our experiments, we set the value of p = 2 and number of slices D = 300.
Supervised Contrastive Loss. In the supervised setting, for any P ′m ∈ B′ indexed at k′ with corresponding label lm[k′], the positive set is defined as A = {P ′m ∈ B′ : P ′m = lm[k′]}. We define our supervised contrastive loss for learning point cloud Wasserstein embeddings as:
Lsup = − 2K∑ i=1 log ∑ j∈A j ̸=i exp(−SW 22 (zi, zj))∑ t ̸=i exp(−SW 22 (zi, zt)) (6) The loss tries to minimize the Sliced Wasserstein distance between the embeddings represented as discrete distribution of an anchor and all the samples having the same class in the augmented batch. This can also be easily converted to a self-supervised version by making necessary modifications.
Self-Supervised Contrastive Loss. Contrary to the supervised setting, in self-supervised setting, the class label of point clouds cannot be used in any way to train the encoder. Here, the positive set of any P ′m ∈ B′ contains only the other augmentation of P ′m. If i ∈ {1 . . . 2K} be the index of any P ′m ∈ B′, then, let j(i) be the index of its other augmented sample. We define our self-supervised loss for learning point cloud Wasserstein embeddings as:
Lself = − 2K∑ i=1 log ( exp(−SW 22 (zi, zj(i)))∑ t ̸=i exp(−SW 22 (zi, zt)) ) (7)
Here, only the Sliced Wasserstein distance between embeddings of an anchor and its augmented sample is minimized. Other than the augmented sample, the samples having the same class in the augmented batch are treated as negatives, which might hinder the overall optimization process depending on the batchsize.
4 EXPERIMENTS
Representation that is able to capture good geometric information in a smooth latent space is generally better in various shape understanding and synthesis tasks. To demonstrate the representation power of the learned Wasserstein embeddings compared to Euclidean embeddings, in this section, we present qualitative and quantitative evaluations on multiple tasks: supervised and self-supervised point cloud classification, transfer learning, point cloud segmentation and point cloud interpolation.
Datasets We use ModelNet10 (MN10) and ModelNet40 (MN40) (Wu et al., 2015) to perform experiments on classification. MN40 consists of 12311 CAD models with a total of 40 categories, where 9843 objects are used for training and 2468 for testing. We use the data provided by Qi et al. (2017b), from which we randomly sample 2048 points for each point cloud. MN10 is a subset of MN40 dataset for 10 categories. To evaluate how the learned embeddings perform on real-world data, we also conduct experiments on ScanObjectNN (Uy et al., 2019). It contains object scans with partial occlusions and background making it a challenging dataset. It has 2304 objects for training and 567 for testing from 15 categories. For part segmentation, we use ShapeNetPart (SN) (Yi et al., 2016) that consists of 16681 point clouds from 16 categories and 50 part categories in total.
Pre-training We use a 3-layer MLP followed by a max-pooling layer as our encoder for classification and segmentation tasks. For interpolation, we consider the encoder and decoder proposed by FoldingNet (Yang et al., 2018). In order to perform any downstream task on a particular dataset, the encoder is first pre-trained on the dataset using the contrastive loss explained in Section 3.1 with different distance metrics, followed by testing and evaluation of the desired task. Throughout the experiments, we refer the encoder trained using our method as CL+SW2 followed by the ground metric space in parenthesis. For the transformations required in contrastive loss, intended towards forming augmented instances, we sequentially compose random scaling, rotation and point jittering. In the case of Euclidean distance metrics, the encoder function f : RNm×3 → Rd maps a point cloud to d-dimensional space, that can be interpreted as vectors, with l2-distance or cosine similarity as distance measures. To account for similarity score given by cosine between two vectors depending on their angles, in Eqs. 6, 7 the negative sign in the numerator should be discarded. Note that when training the encoder with cosine similarity as a distance measure, the embeddings are normalized.
Baselines We consider L2-distance and Cosine similarity as distance measures for computing Euclidean embeddings. We train the encoder using our loss (Eqs. 6, 7), by replacing SW 22 (·, ·) with these measures in our method. We also consider recent methods for point clouds using Wasserstein metric i.e., WPCE (Kawano et al., 2020) and SSW-AE (Nguyen et al., 2021) as our baselines. WPCE embeds Wasserstein space into Euclidean space using Siamese network. It considers PointNet (Qi et al., 2017a) based encoder-decoder architecture. The network is trained in such a way that the Euclidean distance mimics the Wasserstein distance between two point clouds. SSW-AE proposed to use SW distance and its variants (max SW and adaptive SW) for reconstruction to learn point cloud embeddings. It tries to supervise PointNet based auto-encoder architecture with different metrics.
4.1 3D OBJECT CLASSIFICATION
We extract point cloud embeddings from a pre-trained encoder and use a simple linear SVM as our classifier. Particularly, we fit a linear SVM classifier on the embeddings acquired by an encoder on the train split and report the overall classification accuracy on the test split. In Figure 1, we can see that features captured by Wasserstein embeddings summarize the overall object geometry in a better way compared to the embeddings learned in Euclidean space. This property also reflects in the classification performance shown in Table 1. We can observe that for both supervised and selfsupervised settings, the classification accuracy with embeddings extracted by the encoder trained with CL+SW2 is higher than that of CL+L2 and CL+Cosine. Thus, compared to Euclidean space, the performance of SW2 is consistently better on all the datasets, which implies that embeddings learnt in Wasserstein space can increase classification accuracy.
We also show that our method is more effective than WPCE and SSW-AE. This improvement can be explained by the difference in the approach to extracting Wasserstein embeddings: our methodology uses the OT metric to operate directly in the embedding space within a contrastive learning setup. This helps in learning better representations by exploiting the similarities between distributions while utilizing the flexibility of the target Wasserstein space.
4.2 TRANSFER LEARNING
We examine the ability of the embeddings acquired by encoders trained with different distance metrics to generalize to unseen classes, by performing transfer learning for point cloud classification. We follow the same process as explained in Section 4.1 for reporting the overall classification accuracy. The quantitative comparisons for transfer learning are shown in Table 2. We perform the evaluation in two transfer learning settings, MN10 to MN40 and SN to MN40. Here, the encoder is pretrained on MN10 and SN, respectively, followed by evaluation on MN40. In both settings, the model generalizes to new unseen classes by wielding the knowledge of geometry learned during training. We can see that CL+SW2 consistently performs better than the other distance measures and methods in both transfer learning settings, with and without supervision. The results imply that Wasserstein embeddings are better at transferring the knowledge of capturing geometry for yielding good classification performance.
4.3 3D OBJECT PART SEGMENTATION
We train a 3-layer MLP network to predict a class label for all points in a point cloud, where the input to this network is the embedding provided by a pre-trained encoder. Part segmentation, in particular, requires a fine-grained understanding of the local geometry of the objects. Along with the global embedding of the point cloud, the per-point embeddings acquired before max-pooling are stacked together and passed to the segmentation network. Note that only the segmentation network weights are optimized, using the standard cross-entropy loss; the encoder's weights are frozen. We evaluate the performance using the mIoU metric. For the mIoU of each class, the IoUs of all parts from that class are averaged. Instance-average mIoU is calculated by taking the mean of the IoUs over all instances. The comparisons of average instance mIoU and per-class average mIoU for both supervised and self-supervised learning settings are shown in Table 3 and Table 4, respectively. We can see that the results outperform the other distance measures and methods, implying that Wasserstein embeddings capture the fine-grained local information required for the task.
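For reference, the per-instance mIoU computation described above can be sketched as follows (NumPy; conventions such as counting an empty union as IoU 1 vary between implementations, so this is one reasonable reading rather than the exact evaluation code):

```python
import numpy as np

def instance_miou(pred: np.ndarray, gt: np.ndarray, part_ids) -> float:
    """Average the IoUs of all parts of one object; pred and gt hold
    per-point part labels of equal length."""
    ious = []
    for p in part_ids:
        inter = np.sum((pred == p) & (gt == p))
        union = np.sum((pred == p) | (gt == p))
        ious.append(1.0 if union == 0 else inter / union)  # empty union -> perfect IoU
    return float(np.mean(ious))

pred = np.array([0, 0, 1, 1, 2])
gt = np.array([0, 1, 1, 1, 2])
print(instance_miou(pred, gt, part_ids=[0, 1, 2]))
```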
4.4 3D SHAPE INTERPOLATION
We further examine the quality of our learnt space by performing shape interpolation between inter- and intra-class point cloud instances. The main aim of this task is to examine which learnt space is capable of capturing the geometric information needed to generate consistent interpolations of 3D point clouds based on their structure. As interpolation is a synthesis task, we need a decoder network to reconstruct the object from its embedding. For this, we train an encoder-decoder network with our contrastive loss (Eq. 6) on the embeddings for the encoder, along with a reconstruction loss for the decoder. We use the encoder and decoder proposed by FoldingNet, which learns to deform a unit sphere to take the shape of a 3D object's surface. We found that optimizing the network for better classification performance while also getting detailed reconstructions is difficult. As our contrastive loss aims to pull point clouds with similar global representations closer, it becomes difficult to accurately reconstruct the input point cloud without fine-grained characteristic information. A simple way to deal with this issue is to assign weights to the individual loss terms, with the weights summing to 1. To train the encoder-decoder, the total effective loss is defined as a weighted sum of our contrastive loss and a reconstruction loss, with weights of 0.2 and 0.8, respectively. We use the Chamfer distance as the reconstruction loss. Interpolation results are shown in Figure 3. We can see that the interpolations done using Wasserstein embeddings follow a smooth path with relatively fewer noisy points. For example, in Figure 3 (b), we can see that for Euclidean, the source chair suddenly transforms to take the shape of the target chair, whereas in Wasserstein, the legs of the chair smoothly morph to become the base of the target chair.
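A sketch of the Chamfer reconstruction loss and the weighted total loss is given below (PyTorch; whether squared distances are used inside the Chamfer term is an implementation choice not stated in the text, and the contrastive term here is a placeholder for Eq. (6)):

```python
import torch

def chamfer(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between batched clouds p: (B, N, 3), q: (B, M, 3)."""
    d = torch.cdist(p, q)                       # (B, N, M) pairwise point distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

recon, target = torch.randn(4, 1024, 3), torch.randn(4, 1024, 3)
contrastive_loss = torch.tensor(1.0)            # placeholder for the loss of Eq. (6)
total = 0.2 * contrastive_loss + 0.8 * chamfer(recon, target)  # weights from the text
```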
4.5 EXPLAINABILITY
We investigate what makes Wasserstein embeddings perform better, as shown in the downstream tasks. We visualize and compare the features captured by Wasserstein embeddings and Euclidean embeddings in Figure 1. These features are called critical points, as shown by Qi et al. (2017a). The embedding of a point cloud is completely determined by this subset of points. The embedding of a point cloud remains the same as long as the set of critical points is unchanged. For a given point cloud, the critical points are those 3D points that contribute to the global embedding after the max-pooling layer. This implies that the number of critical points cannot be greater than the embedding size. The selection of critical points is extremely important, as they solely decide the embedding of a point cloud. This makes it clear that, for good quality embeddings, the critical points should best describe the given point cloud. In Figure 1, we can see that the network intelligently tries to summarize the point cloud by choosing boundary points as the critical points. Our Wasserstein embeddings are able to capture the full skeleton structure of the given point cloud, whereas the critical points captured by Euclidean embeddings are comparatively poor, with uneven distribution and missing parts. Thus, we can say that Wasserstein spaces are indeed better at preserving and capturing geometric structure amenable to the optimization task.
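Critical points can be recovered directly from the max-pooling step, as in this sketch (PyTorch; `per_point_feats` denotes the per-point features right before the pooling layer, a name we introduce for illustration):

```python
import torch

def critical_points(per_point_feats: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
    """per_point_feats: (N, d) features before max pooling; points: (N, 3).
    A point is critical if it attains the maximum in at least one embedding
    dimension, so there can be at most d distinct critical points."""
    winners = per_point_feats.argmax(dim=0)     # (d,) winning point per dimension
    return points[winners.unique()]             # subset that fully determines the embedding

feats, cloud = torch.randn(2048, 512), torch.randn(2048, 3)
print(critical_points(feats, cloud).shape)
```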
4.6 ABLATION STUDY
We perform point perturbation and point density variation tests to study their effects on the encoders pretrained with different distance metrics, and report the classification accuracy on ModelNet40, as shown in Figure 4. For the point perturbation test, we add Gaussian noise to the input point clouds, with the standard deviation of the noise varying from 0.01 to 0.1. We observe that for all noise levels, even with severe distortion, CL+SW2 performs better than CL+L2. This implies that the discrete representation learnt in Wasserstein space is less prone to performance degradation due to noisy inputs. Further, for the varying-density test, we randomly sample 8192, 4096, 2048, 1024, 512, 256 and 128 points from the input point clouds and perform evaluation on them. We observe that CL+SW2 consistently does better than CL+L2. This shows that Wasserstein embeddings are robust towards missing points in the input point cloud.
5 CONCLUSION
In this paper, we proposed to represent point clouds as discrete probability distributions in the Wasserstein space. We built a contrastive learning method to learn Wasserstein embeddings for 3D point clouds. Our proposed method can be used as a pretrained model for any downstream network in supervised and self-supervised settings. Empirically, we found that representations learnt using our pre-training of contrastive learning with the Sliced Wasserstein distance captured the structure and underlying geometry better than standard Euclidean embeddings. With improved embeddings, our method outperformed all the existing methods, including our baselines with L2 norm and cosine similarity, on all the downstream tasks (classification, segmentation, transfer learning, interpolation) in both supervised and self-supervised settings. We also presented a study of the self-explainability of our method, which captures the critical points of point clouds better than embeddings in Euclidean space. For future work, a possible direction is to explore other related problems, such as domain adaptation for point clouds using optimal transport. Another interesting aspect is to consider complex datasets containing multiple objects and scenes of point clouds.
Reproducibility Statement: Our proposed method is easily reproducible: it takes pairs of point clouds as input. The network architecture explained in Figure 2 consists of simple MLP layers followed by max pooling and reshaping of the embedding. We state the contrastive loss functions for the supervised and self-supervised settings in Section 3.1. The pre-training setup is detailed in the Experiments Section 4. The datasets used in this paper are well-known in the point cloud domain, and we provide references for all of them in the Experiments section. Our code will be made publicly available after the acceptance of the work. | 1. What is the focus and contribution of the paper regarding contrastive learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its use of sliced Wasserstein distance?
3. Do you have any concerns or questions regarding the training setup and fine-tuning process?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any recent works that the reviewer thinks the authors should consider for comparison? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes to use sliced Wasserstein distance as a loss function to perform contrastive learning.
Strengths And Weaknesses
+: The paper is well-written, the technical part is clear.
+: Using sliced Wasserstein distance is interesting as a method.
-: I could not find the training setup. There are mentions of "self-supervised pre-training" and "supervised pre-training". However, there does not seem to be any fine-tuning process, and it is not clear how many labels were used in the supervised pre-training or whether there is a fine-tuning step.
-: The results seem very bad. Numbers are significantly lower than other prior work that was not cited by the paper:
H. Wang et al. Unsupervised Point Cloud Pre-training via Occlusion Completion. ICCV 2021
B. Eckart et al. Self-Supervised Learning on 3D Point Clouds by Learning Discrete Generative Models. CVPR 2021
X. Yu et al. Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point Modeling. CVPR 2022
S. Xie et al. PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding. ECCV 2020
Some of these works, such as Point-BERT, use different model architectures, so it is probably OK not to beat them. But the authors in general seem to be too careless about related work (numerous citations are missing) and about their model's performance (a 3-layer MLP with max-pooling is not going to get you very far). Because of the very low performance, it is difficult to judge the merit of the technical contribution.
Clarity, Quality, Novelty And Reproducibility
The paper is well-written and very novel. However, the experiments seem careless and the performance trails the state of the art by too much (around 10%-20%).
ICLR | Title
Learning 3D Point Cloud Embeddings using Optimal Transport
Abstract
Learning embeddings of any data largely depends on the ability of the target space to capture semantic relations. The widely used Euclidean space, where embeddings are represented as point vectors, is known to be lacking in its potential to exploit complex structures and relations. Contrary to standard Euclidean embeddings, in this work, we embed point clouds as discrete probability distributions in Wasserstein space. We build a contrastive learning setup to learn Wasserstein embeddings that can be used as a pre-training method with or without supervision for any downstream task. We show that the features captured by Wasserstein embeddings are better in preserving the point cloud geometry, including both global and local information, thus resulting in improved quality embeddings. We perform exhaustive experiments and demonstrate the effectiveness of our method for point cloud classification, transfer learning, segmentation and interpolation tasks over multiple datasets including synthetic and real-world objects in both supervised and self-supervised settings. We also compare against other existing methods and show that our method outperforms them in all downstream tasks. Additionally, our study reveals a promising interpretation of capturing critical points of point clouds that makes our proposed method self-explainable.
1 INTRODUCTION
Recent years have seen major advancements in 3D point cloud representation learning. It has gained prominence in a wide spectrum of areas such as robotics (Maturana & Scherer, 2015), computer vision (Su et al., 2015), and animation (Pan et al., 2020), with a broad range of applications including shape synthesis and modeling (Yi et al., 2016), autonomous driving (Mahjourian et al., 2018), and indoor navigation (Zhu et al., 2017). Metric learning for good quality point cloud embeddings is a crucial problem given the unique set of challenges associated with 3D data, from processing point clouds in various forms to learning in different spaces. Processing and developing learning methods for point clouds is a major challenge due to their irregular, unstructured and unordered nature.
Earlier methods process point clouds by converting them into regular structures like volumetric representations (Maturana & Scherer, 2015), (Wu et al., 2015) or 2D image projections (Qi et al., 2016), (Su et al., 2015) to employ well-explored, powerful convolutional techniques. However, these transformations either incur a loss of information or require high memory and computational complexity. Later, methods were developed to learn representations directly from raw point clouds (Qi et al., 2017a), (Qi et al., 2017b), (Wang et al., 2019). These methods either process each point individually or try to infer features from local regions in a point cloud. The state-of-the-art methods in this category are largely classification-, generation- or reconstruction-based supervised, unsupervised or self-supervised methods.
The common choice of recent 3D point cloud representation learning methods is to operate on and represent point clouds as point vectors in Euclidean spaces, where the relation between data points is depicted by either angle or distance. It is well known that the embedding space largely determines the quality of embeddings, as that quality depends on how well the target space can capture the structure of the data. Euclidean space is confined in its potential to capture complex structures and possible semantic relations. Realizing these drawbacks, many works use hyperbolic space (Nickel & Kiela, 2018), (Nickel & Kiela, 2017) to capture uncertainty and asymmetric relationships for word and graph embeddings.
As Euclidean space is constrained in its ability to represent data structures, we need to go beyond it to get more expressive embeddings for point clouds. Recent studies show that many spaces can be embedded into Wasserstein space with low distortion (Frogner et al., 2019), which reflects how large Wasserstein spaces are. Recently, Courty et al. (2018) tried to mimic the Wasserstein distance in Euclidean space for image embeddings to build efficient methods while availing the flexibility of Wasserstein space. There are also some recent methods for point cloud embeddings using Optimal Transport (OT) based distances. Kawano et al. (2020), motivated by Courty et al. (2018), propose a method to approximate the Wasserstein distance by the Euclidean norm between two point cloud embeddings. Since Euclidean space is known for its limited ability, finding isometric low-distortion point cloud embeddings is difficult. Another work, by Nguyen et al. (2021), studies how Optimal Transport based distances for point cloud reconstruction affect the quality of learnt embeddings. However, this method utilizes OT-based distances only for the reconstruction loss, which is not enough to learn complex shapes and fails to capture the fine details of point clouds.
Motivated by the aforementioned limitations, and inspired by Frogner et al. (2019), in this paper we advocate for mapping a point cloud to a discrete distribution in Wasserstein space. We build a contrastive learning setup to learn point cloud embeddings. Leveraging the idea of contrasting point clouds against each other, we intend to learn common and distinctive features between same and different distributions, respectively. The setup can be applied in both supervised and self-supervised settings. For this, we use the Sliced Wasserstein (SW) distance, a low-cost approximation of the Wasserstein distance, whose exact computation is expensive. Along with comparisons against commonly used distance measures such as the L2 norm and cosine similarity, we also compare our method against recent works on point clouds using OT. We show that the learnt features capture the point cloud structure better than Euclidean embeddings and consistently perform better in multiple 3D analysis and synthesis tasks. We argue that our approach of incorporating an OT metric in a contrastive learning setup captures the underlying geometry and global shape pertaining to critical points (as shown in Figure 1), as well as the fine details of a point cloud.
Our contributions: i) To the best of our knowledge, we are the first to propose the use of an OT metric, which exploits the geometry of the data, together with contrastive learning for point clouds. Unlike Euclidean embeddings, we represent a point cloud as a discrete probability distribution in the embedding space. ii) Using this representation, we develop a method to learn Wasserstein embeddings for 3D point clouds in a contrastive learning setup. We introduce a novel neural network architecture which takes pairs of point clouds as input. It uses a supervised/self-supervised contrastive loss, depending on the availability of labels, to minimize the Wasserstein distance between similar point clouds. A major advantage of our network is that it can be used as a pretrained model for any downstream network. iii) We perform exhaustive experiments over a wide variety of tasks (supervised and self-supervised learning for classification, transfer learning, segmentation, and interpolation) on four popular point cloud datasets. We show that our Wasserstein embeddings are better at capturing the inherent geometry of point clouds. Additionally, we study point cloud embeddings in the most commonly used Euclidean space for our proposed architecture by replacing the OT metric with the L2 norm (our baseline). We also compare our approach (CL+SW2) against the other existing methods and show that our method outperforms them in all the downstream tasks. iv) We further explore the self-explaining aspect of our model and illustrate the 3D Wasserstein features computed by the encoder (as shown in Figure 1). We show Wasserstein embeddings are better at capturing critical points and semantic structure amenable to the optimization task.
2 PRELIMINARIES
In this section, we briefly present the optimal transport metric, variants of Wasserstein distance, and contrastive learning setup which are used in our proposed method.
2.1 OPTIMAL TRANSPORT AND WASSERSTEIN DISTANCE
Optimal transport aims to solve for the most efficient way to transport mass between two probability distributions. Formally, given two probability distributions $\mu$ and $\nu$ on a metric space $\mathcal{X}$, for $p \geq 1$, the p-Wasserstein distance is given by

$$W_p(\mu, \nu) = \left( \inf_{\pi \in \Pi(\mu,\nu)} \int_{\mathcal{X} \times \mathcal{X}} c(x, y)^p \, d\pi(x, y) \right)^{1/p} \quad (1)$$

where $\pi$ is a transport plan that defines a flow between mass from $\mu$ to locations in $\nu$, $\Pi(\mu, \nu)$ is the set of joint probability distributions with marginals $\mu$ and $\nu$, and $c(x, y)$ is the ground metric, which assigns a cost of moving a unit of mass $x \in \mathcal{X}$ from $\mu$ to some location $y \in \mathcal{X}$ in $\nu$. The cost of moving the mass in $\mu$ to match $\nu$ according to the optimal transport plan $\pi^*$ is called the Wasserstein distance between the two distributions (Villani, 2003).
The above equation can also be written for discrete distributions. Let $\hat{\mu} = \sum_{i=1}^{m} a_i \delta(x_i)$ and $\hat{\nu} = \sum_{j=1}^{n} b_j \delta(y_j)$ be two discrete distributions, where $\{a_i\}_{i=1}^{m}$ and $\{b_j\}_{j=1}^{n}$ are the probability masses that sum to 1, $\delta$ is the Dirac delta function, and $\{x_i\}_{i=1}^{m}$ and $\{y_j\}_{j=1}^{n}$ are the support points in $\mathbb{R}^d$, with $m$ and $n$ being the number of points in each measure. Then, the discrete version of Equation 1 is

$$W_p(\hat{\mu}, \hat{\nu}) = \left( \min_{P \in U(a,b)} \langle C^p, P \rangle \right)^{1/p} \quad (2)$$
where $\langle \cdot, \cdot \rangle$ denotes the Frobenius dot-product, $C \in \mathbb{R}^{m \times n}_{+}$ is the pairwise ground metric distance, $P$ is the coupling matrix and $U$ is the set of all valid coupling matrices, i.e. $U(a, b) = \{P \in \mathbb{R}^{m \times n} : P\mathbf{1}_n = a, P^\top \mathbf{1}_m = b\}$. Interestingly, there exists a closed-form solution for the Wasserstein distance only when the distributions are one-dimensional measures with the $L_p$ norm as the cost function. The closed form for the Wasserstein distance in 1-D is (Peyré & Cuturi, 2019)

$$W_p(\mu, \nu) = \left( \int_0^1 |F_\mu^{-1}(t) - F_\nu^{-1}(t)|^p \, dt \right)^{1/p} \quad (3)$$

where $F_\mu^{-1}$ and $F_\nu^{-1}$ are the inverse cumulative distribution functions of $\mu$ and $\nu$.
Generally, we are more interested in dimensions greater than one. Thus, we cannot use this closed-form solution directly to solve the OT problem efficiently. Instead, the Wasserstein distance between two measures on $\mathbb{R}^d$ can be approximated by aggregating the 1-D Wasserstein distances between their projections over multiple directions on the unit sphere, which is called the Sliced Wasserstein distance (Peyré & Cuturi, 2019):

$$SW_p(\mu, \nu) = \left( \int_{\mathbb{S}^{d-1}} W_p(P_{\theta,\#}\mu, P_{\theta,\#}\nu)^p \, d\theta \right)^{1/p} \quad (4)$$

where $\mathbb{S}^{d-1} = \{\theta \in \mathbb{R}^d : \|\theta\| = 1\}$ is the $d$-dimensional unit sphere and $P_\theta : \mathbb{R}^d \to \mathbb{R}$ is the projection. Since the projections are now 1-D measures, we can use the closed-form solution given by Equation 3. When $m = n$, the Sliced Wasserstein distance can be easily computed by simply sorting the points in the 1-D measures and is given by:
$$SW_p(\hat{\mu}, \hat{\nu}) = \left( \frac{1}{D} \sum_{k=1}^{D} \sum_{i=1}^{m} |x_{\alpha_{\theta_k}(i)} - y_{\beta_{\theta_k}(i)}|^p \right)^{1/p} \quad (5)$$
where $\alpha_{\theta_k}$ and $\beta_{\theta_k}$ are the permutations that order the support points projected onto the direction $\theta_k$ in increasing order, with $D$ being the total number of directions.
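A minimal NumPy sketch of this sort-based computation is given below (Monte-Carlo directions on the unit sphere; averaging over points corresponds to the uniform mass 1/m on each support point, a common normalization convention):

```python
import numpy as np

def sliced_wasserstein(x: np.ndarray, y: np.ndarray,
                       n_slices: int = 300, p: int = 2, seed: int = 0) -> float:
    """Sliced Wasserstein distance between two equal-size point sets (m = n)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_slices, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # directions on S^{d-1}
    x_proj = np.sort(theta @ x.T, axis=1)   # sorting gives the 1-D optimal coupling
    y_proj = np.sort(theta @ y.T, axis=1)
    return float(np.mean(np.abs(x_proj - y_proj) ** p) ** (1.0 / p))

a = np.random.randn(2048, 3)
b = np.random.randn(2048, 3) + 1.0
print(sliced_wasserstein(a, b))
```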
2.2 CONTRASTIVE LEARNING
Contrastive learning aims to learn an embedding space that encourages augmentations of the same input sample to have similar representations and those of different samples to be dissimilar. Chopra et al. (2005) is an early example of using contrastive learning in a supervised learning setup which takes pairs of samples as input to the network.
On the other hand, the contrastive loss introduced by Chen et al. (2020) is named SimCLR. It follows batch-wise training and operates in a self-supervised setting. In this setup, the distance is reduced between a sample and its augmentations. Later, Khosla et al. (2020) proposed an extension of SimCLR to the supervised setup. It additionally aims at reducing the distance between a sample and other samples from the same class in a supervised setting.
3 OUR METHOD
In this section, we discuss our method for computing Wasserstein embeddings for point clouds in a contrastive learning setup, as shown in Figure 2. We build an in-batch contrastive learning setup which can be either fully supervised or self-supervised and can be used as a pre-training methodology for any downstream task. The goal is to place samples from the same class closer together than samples from different classes in the embedding space (larger inter-cluster and smaller intra-cluster distances). Here, the choice of embedding space plays a key role in achieving desirable performance, as individual metric spaces embed data differently and represent different types of semantic structure.
3.1 CONTRASTIVE LEARNING WITH OPTIMAL TRANSPORT
Let $O = \{(P_m, l_m)\}_{m=1}^{M}$ be a collection of point clouds $P_m = \{p_i\}_{i=1}^{N_m}$, where $p_i \in \mathbb{R}^3$, with their corresponding class labels $l_m \in L$, where $L = \{1, \dots, C\}$ is the set of class labels. Each point cloud $P_m$ contains $N_m$ points defined in 3D space by their $x$, $y$ and $z$ coordinates. To define the batch-wise contrastive loss, we first randomly draw $K$ samples from the collection $O$, which form a batch $B = \{(P_m, l_m)_k\}_{k=1}^{K}$. For every point cloud $P_m \in B$, we apply a fixed set of random transformations $T_1$ and $T_2$ to get two instances of $P_m$ (as shown in Figure 2), giving an augmented batch $B' = \{(P'_m, l_m)_{k'}\}_{k'=1}^{2K}$. The augmented batch is twice the size of the original batch. The point clouds $P'_m$ indexed at $k'$ and $k'+1$ are augmented versions of the point cloud $P_m$ indexed at $k$. As these are augmented versions of $P_m[k]$, their class labels are $l_m[k'] = l_m[k'+1] = l_m[k]$.
The input to the encoder is an augmented batch $B'$, from which every $P'_m$ needs to be mapped to the embedding space depending on its geometric features and appearance, with samples having the same class label being closer. The encoder represents a function $f : \mathbb{R}^{N_m \times 3} \to \mathcal{W}(\mathcal{X})$ that maps a point cloud $P'_m$ to the Wasserstein space $\mathcal{W}(\mathcal{X})$, with $W_p$ being the distance metric on $\mathcal{W}(\mathcal{X})$ and $\mathcal{X}$ being the ground metric space. We choose $\mathbb{R}^2$, $\mathbb{R}^4$ and $\mathbb{R}^8$ to be our ground metric spaces, in which the corresponding embedding $z'_m$ of $P'_m$ is represented as a discrete distribution $\frac{1}{S}\sum_{i=1}^{S}\delta(x_i)$ supported by points $x_i \in \mathcal{X}$, with a total of $S$ support points, all with uniform probability mass $\frac{1}{S}$. In our implementation, we reshape the embedding $z'_m$ of $P'_m$ to obtain the discrete distribution for the different ground metric spaces.
Generally, computing the exact solution of $W_p$ is costly. To make the computation of optimal transport more tractable, we replace the distance metric $W_p$ on the Wasserstein space $\mathcal{W}(\mathcal{X})$ with the Sliced Wasserstein distance metric $SW_p$. $SW_p$ is a low-cost approximation of the Wasserstein distance with computational complexity $O(S \log S)$. For all our experiments, we set the value of $p = 2$ and the number of slices $D = 300$.
Supervised Contrastive Loss. In the supervised setting, for any $P'_m \in B'$ indexed at $k'$ with corresponding label $l_m[k']$, the positive set is defined as $A = \{P'_m \in B' : l_m = l_m[k']\}$, i.e., all samples in the augmented batch sharing the anchor's class label. We define our supervised contrastive loss for learning point cloud Wasserstein embeddings as:
$$\mathcal{L}_{sup} = - \sum_{i=1}^{2K} \log \sum_{\substack{j \in A, \, j \neq i}} \frac{\exp(-SW_2^2(z_i, z_j))}{\sum_{t \neq i} \exp(-SW_2^2(z_i, z_t))} \quad (6)$$

The loss tries to minimize the Sliced Wasserstein distance between the embeddings, represented as discrete distributions, of an anchor and all the samples having the same class in the augmented batch. This can also be easily converted to a self-supervised version by making the necessary modifications.
Self-Supervised Contrastive Loss. Contrary to the supervised setting, in the self-supervised setting the class labels of point clouds cannot be used in any way to train the encoder. Here, the positive set of any $P'_m \in B'$ contains only the other augmentation of $P'_m$. If $i \in \{1, \dots, 2K\}$ is the index of any $P'_m \in B'$, let $j(i)$ be the index of its other augmented sample. We define our self-supervised loss for learning point cloud Wasserstein embeddings as:
$$\mathcal{L}_{self} = - \sum_{i=1}^{2K} \log \left( \frac{\exp(-SW_2^2(z_i, z_{j(i)}))}{\sum_{t \neq i} \exp(-SW_2^2(z_i, z_t))} \right) \quad (7)$$
Here, only the Sliced Wasserstein distance between the embeddings of an anchor and its augmented sample is minimized. Apart from the augmented sample, samples of the same class in the augmented batch are treated as negatives, which might hinder the overall optimization process depending on the batch size.
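To make the objective concrete, a minimal sketch of the self-supervised loss in Eq. (7) is given below (PyTorch; we assume embeddings are reshaped to S uniform-mass support points in a g-dimensional ground space, and that the two views of each cloud sit at adjacent batch indices — both are our reading of the setup, not the authors' exact implementation):

```python
import torch
import torch.nn.functional as F

def sw2_matrix(z: torch.Tensor, n_slices: int = 300) -> torch.Tensor:
    """Pairwise squared Sliced Wasserstein distances; z: (B, S, g)."""
    theta = torch.randn(n_slices, z.shape[-1], device=z.device)
    theta = theta / theta.norm(dim=1, keepdim=True)
    proj, _ = torch.einsum('bsg,dg->bds', z, theta).sort(dim=-1)  # (B, D, S)
    diff = proj.unsqueeze(1) - proj.unsqueeze(0)                  # (B, B, D, S)
    return diff.pow(2).mean(dim=(-1, -2))                         # (B, B)

def self_supervised_loss(z: torch.Tensor) -> torch.Tensor:
    """Eq. (7) with augmented pairs stored at indices (2k, 2k + 1)."""
    sim = -sw2_matrix(z)                          # similarity = -SW_2^2
    sim.fill_diagonal_(float('-inf'))             # exclude each anchor itself
    idx = torch.arange(z.shape[0], device=z.device)
    positives = idx + 1 - 2 * (idx % 2)           # index of the paired view
    return F.cross_entropy(sim, positives)

z = torch.randn(8, 64, 4)                         # 2K = 8 embeddings, S = 64, g = 4
print(self_supervised_loss(z))
```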
4 EXPERIMENTS
A representation that captures good geometric information in a smooth latent space is generally better suited to various shape understanding and synthesis tasks. To demonstrate the representational power of the learned Wasserstein embeddings compared to Euclidean embeddings, in this section we present qualitative and quantitative evaluations on multiple tasks: supervised and self-supervised point cloud classification, transfer learning, point cloud segmentation and point cloud interpolation.
Datasets We use ModelNet10 (MN10) and ModelNet40 (MN40) (Wu et al., 2015) to perform experiments on classification. MN40 consists of 12311 CAD models with a total of 40 categories, where 9843 objects are used for training and 2468 for testing. We use the data provided by Qi et al. (2017b), from which we randomly sample 2048 points for each point cloud. MN10 is a subset of the MN40 dataset covering 10 categories. To evaluate how the learned embeddings perform on real-world data, we also conduct experiments on ScanObjectNN (Uy et al., 2019). It contains object scans with partial occlusions and background clutter, making it a challenging dataset. It has 2304 objects for training and 567 for testing from 15 categories. For part segmentation, we use ShapeNetPart (SN) (Yi et al., 2016), which consists of 16681 point clouds from 16 categories with 50 part categories in total.
Pre-training We use a 3-layer MLP followed by a max-pooling layer as our encoder for the classification and segmentation tasks. For interpolation, we consider the encoder and decoder proposed by FoldingNet (Yang et al., 2018). In order to perform any downstream task on a particular dataset, the encoder is first pre-trained on the dataset using the contrastive loss explained in Section 3.1 with different distance metrics, followed by testing and evaluation on the desired task. Throughout the experiments, we refer to the encoder trained using our method as CL+SW2, followed by the ground metric space in parentheses. For the transformations required in the contrastive loss, intended to form augmented instances, we sequentially compose random scaling, rotation and point jittering. In the case of Euclidean distance metrics, the encoder function $f : \mathbb{R}^{N_m \times 3} \to \mathbb{R}^d$ maps a point cloud to a $d$-dimensional vector, with L2 distance or cosine similarity as the distance measure. Since cosine similarity is a similarity score between two vectors depending on their angle (rather than a distance), the negative sign in the numerators of Eqs. 6 and 7 is discarded when it is used. Note that when training the encoder with cosine similarity as the distance measure, the embeddings are normalized.
Baselines We consider L2 distance and cosine similarity as distance measures for computing Euclidean embeddings. We train the encoder using our loss (Eqs. 6, 7) by replacing $SW_2^2(\cdot, \cdot)$ with these measures in our method. We also consider recent methods for point clouds using the Wasserstein metric, i.e., WPCE (Kawano et al., 2020) and SSW-AE (Nguyen et al., 2021), as our baselines. WPCE embeds Wasserstein space into Euclidean space using a Siamese network. It considers a PointNet-based (Qi et al., 2017a) encoder-decoder architecture. The network is trained in such a way that the Euclidean distance mimics the Wasserstein distance between two point clouds. SSW-AE proposed to use the SW distance and its variants (max SW and adaptive SW) for reconstruction to learn point cloud embeddings. It supervises a PointNet-based auto-encoder architecture with different metrics.
4.1 3D OBJECT CLASSIFICATION
We extract point cloud embeddings from a pre-trained encoder and use a simple linear SVM as our classifier. Specifically, we fit a linear SVM classifier on the embeddings produced by an encoder on the train split and report the overall classification accuracy on the test split. In Figure 1, we can see that the features captured by Wasserstein embeddings summarize the overall object geometry better than the embeddings learned in Euclidean space. This property is also reflected in the classification performance shown in Table 1. We observe that, for both supervised and self-supervised settings, the classification accuracy with embeddings extracted by the encoder trained with CL+SW2 is higher than that of CL+L2 and CL+Cosine. Thus, compared to Euclidean space, the performance of SW2 is consistently better on all the datasets, which implies that embeddings learnt in Wasserstein space can increase classification accuracy.
We also show that our method is more effective than WPCE and SSW-AE. This improvement can be explained by the difference in the approach to extracting Wasserstein embeddings: our methodology uses the OT metric to operate directly in the embedding space within a contrastive learning setup. This helps in learning better representations by exploiting the similarities between distributions while utilizing the flexibility of the target Wasserstein space.
4.2 TRANSFER LEARNING
We examine the ability of the embeddings acquired by encoders trained with different distance metrics to generalize to unseen classes, by performing transfer learning for point cloud classification. We follow the same process as explained in Section 4.1 for reporting the overall classification accuracy. The quantitative comparisons for transfer learning are shown in Table 2. We perform the evaluation in two transfer learning settings, MN10 to MN40 and SN to MN40. Here, the encoder is pretrained on MN10 and SN, respectively, followed by evaluation on MN40. In both settings, the model generalizes to new unseen classes by wielding the knowledge of geometry learned during training. We can see that CL+SW2 consistently performs better than the other distance measures and methods in both transfer learning settings, with and without supervision. The results imply that Wasserstein embeddings are better at transferring the knowledge of capturing geometry for yielding good classification performance.
4.3 3D OBJECT PART SEGMENTATION
We train a 3-layer MLP network to predict a class label for all points in a point cloud, where the input to this network is the embedding provided by a pre-trained encoder. Part segmentation, in particular, requires a fine-grained understanding of the local geometry of the objects. Along with the global embedding of the point cloud, the per-point embeddings acquired before max-pooling are stacked together and passed to the segmentation network. Note that only the segmentation network weights are optimized, using the standard cross-entropy loss; the encoder's weights are frozen. We evaluate the performance using the mIoU metric. For the mIoU of each class, the IoUs of all parts from that class are averaged. Instance-average mIoU is calculated by taking the mean of the IoUs over all instances. The comparisons of average instance mIoU and per-class average mIoU for both supervised and self-supervised learning settings are shown in Table 3 and Table 4, respectively. We can see that the results outperform the other distance measures and methods, implying that Wasserstein embeddings capture the fine-grained local information required for the task.
4.4 3D SHAPE INTERPOLATION
We further examine the quality of our learnt space by performing shape interpolation between inter- and intra-class point cloud instances. The main aim of this task is to examine which learnt space is capable of capturing the geometric information needed to generate consistent interpolations of 3D point clouds based on their structure. As interpolation is a synthesis task, we need a decoder network to reconstruct the object from its embedding. For this, we train an encoder-decoder network with our contrastive loss (Eq. 6) on the embeddings for the encoder, along with a reconstruction loss for the decoder. We use the encoder and decoder proposed by FoldingNet, which learns to deform a unit sphere to take the shape of a 3D object's surface. We found that optimizing the network for better classification performance while also getting detailed reconstructions is difficult. As our contrastive loss aims to pull point clouds with similar global representations closer, it becomes difficult to accurately reconstruct the input point cloud without fine-grained characteristic information. A simple way to deal with this issue is to assign weights to the individual loss terms, with the weights summing to 1. To train the encoder-decoder, the total effective loss is defined as a weighted sum of our contrastive loss and a reconstruction loss, with weights of 0.2 and 0.8, respectively. We use the Chamfer distance as the reconstruction loss. Interpolation results are shown in Figure 3. We can see that the interpolations done using Wasserstein embeddings follow a smooth path with relatively fewer noisy points. For example, in Figure 3 (b), we can see that for Euclidean, the source chair suddenly transforms to take the shape of the target chair, whereas in Wasserstein, the legs of the chair smoothly morph to become the base of the target chair.
4.5 EXPLAINABILITY
We investigate what makes Wasserstein embeddings perform better, as shown in the downstream tasks. We visualize and compare the features captured by Wasserstein embeddings and Euclidean embeddings in Figure 1. These features are called critical points, as shown by Qi et al. (2017a). The embedding of a point cloud is completely determined by this subset of points. The embedding of a point cloud remains the same as long as the set of critical points is unchanged. For a given point cloud, the critical points are those 3D points that contribute to the global embedding after the max-pooling layer. This implies that the number of critical points cannot be greater than the embedding size. The selection of critical points is extremely important, as they solely decide the embedding of a point cloud. This makes it clear that, for good quality embeddings, the critical points should best describe the given point cloud. In Figure 1, we can see that the network intelligently tries to summarize the point cloud by choosing boundary points as the critical points. Our Wasserstein embeddings are able to capture the full skeleton structure of the given point cloud, whereas the critical points captured by Euclidean embeddings are comparatively poor, with uneven distribution and missing parts. Thus, we can say that Wasserstein spaces are indeed better at preserving and capturing geometric structure amenable to the optimization task.
4.6 ABLATION STUDY
We perform point perturbation and point density variation tests to study their effects on the encoders pretrained with different distance metrics, and report the classification accuracy on ModelNet40, as shown in Figure 4. For the point perturbation test, we add Gaussian noise to the input point clouds, with the standard deviation of the noise varying from 0.01 to 0.1. We observe that for all noise levels, even with severe distortion, CL+SW2 performs better than CL+L2. This implies that the discrete representation learnt in Wasserstein space is less prone to performance degradation due to noisy inputs. Further, for the varying-density test, we randomly sample 8192, 4096, 2048, 1024, 512, 256 and 128 points from the input point clouds and perform evaluation on them. We observe that CL+SW2 consistently does better than CL+L2. This shows that Wasserstein embeddings are robust towards missing points in the input point cloud.
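Both robustness tests can be reproduced in a few lines, as sketched below (NumPy; the noise scales and sample sizes match the ranges reported above):

```python
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.standard_normal((8192, 3))

# Point perturbation test: additive Gaussian noise with varying std.
for std in (0.01, 0.05, 0.1):
    noisy = cloud + rng.normal(scale=std, size=cloud.shape)

# Varying density test: random subsampling without replacement.
for n in (8192, 4096, 2048, 1024, 512, 256, 128):
    sparse = cloud[rng.choice(len(cloud), size=n, replace=False)]
```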
5 CONCLUSION
In this paper, we proposed to represent point clouds as discrete probability distributions in the Wasserstein space. We built a contrastive learning method to learn Wasserstein embeddings for 3D point clouds. Our proposed method can be used as a pretrained model for any downstream network in supervised and self-supervised settings. Empirically, we found that representations learnt using our pre-training of contrastive learning with the Sliced Wasserstein distance captured the structure and underlying geometry better than standard Euclidean embeddings. With improved embeddings, our method outperformed all the existing methods, including our baselines with L2 norm and cosine similarity, on all the downstream tasks (classification, segmentation, transfer learning, interpolation) in both supervised and self-supervised settings. We also presented a study of the self-explainability of our method, which captures the critical points of point clouds better than embeddings in Euclidean space. For future work, a possible direction is to explore other related problems, such as domain adaptation for point clouds using optimal transport. Another interesting aspect is to consider complex datasets containing multiple objects and scenes of point clouds.
Reproducibility Statement: Our proposed method is easily reproducible: it takes pairs of point clouds as input. The network architecture explained in Figure 2 consists of simple MLP layers followed by max pooling and reshaping of the embedding. We state the contrastive loss functions for the supervised and self-supervised settings in Section 3.1. The pre-training setup is detailed in the Experiments Section 4. The datasets used in this paper are well-known in the point cloud domain, and we provide references for all of them in the Experiments section. Our code will be made publicly available after the acceptance of the work. | 1. What is the main contribution of the paper in terms of point cloud embeddings?
2. What are the strengths of the proposed approach, particularly in comparison to other existing methods?
3. What are the weaknesses of the paper regarding its novelty and technical contributions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposed a method that leverages optimal transport along with contrastive learning to learn point cloud embeddings for pre-trained models. The take-home message of this paper is to advocate for mapping point clouds to discrete distributions in Wasserstein space, which is advantageous compared to point cloud representations in Euclidean space. The authors compare the developed approach against other existing methods and demonstrate its advantage in several point cloud tasks, including classification, transfer learning, segmentation, and interpolation, in both supervised and self-supervised settings.
Strengths And Weaknesses
The paper is well-written in clearly describing its methodology and supporting its claim based on qualitative and quantitative evaluations on multiple point cloud tasks. The reviewer particularly values the study of point cloud embedding in Figure 1 which visualizes critical points following the PointNet practice. Figure 1 demonstrates that the resulting embeddings from the proposed method are better in capturing global geometry.
However, the reviewer has concerns about the novelty and technical contributions. The developed approach is a straightforward application of contrastive learning and optimal transport. It is not surprising to the reviewer, since embedding point cloud data as probability distributions in a Wasserstein space has been previously studied in [1], in which the authors already demonstrated the advantage of learned Wasserstein embeddings over Euclidean embeddings. The proposed method is therefore limited in novelty and technical contribution, given that it essentially takes the representation from [1] and integrates it into a contrastive learning framework.
Clarity, Quality, Novelty And Reproducibility
The authors conducted extensive evaluations across multiple datasets in different tasks, which validates the effectiveness of the proposed method.
It is clear to the reviewer that Wasserstein embeddings are advantageous over Euclidean embeddings on the studied tasks. |
ICLR | Title
Hierarchical Prototypes for Unsupervised Dynamics Generalization in Model-Based Reinforcement Learning
Abstract
Generalization remains a central challenge in model-based reinforcement learning. Recent works attempt to model the environment-specific factor and incorporate it as part of the dynamics prediction to enable generalization to different contexts. Earlier research, which estimates environment-specific factors from historical transitions, was unable to clearly distinguish the environment-specific factors of different environments, resulting in poor performance. To address this issue, we introduce a set of environment prototypes to represent the environment-specific representation of each environment. By encouraging learned environment-specific factors to resemble their assigned environmental prototypes more closely, the discrimination of factors between different environments is enhanced. To learn such prototypes in an unsupervised manner, we propose a hierarchical prototypical method which first builds trajectory embeddings according to the trajectory label information, and then hierarchically constructs environmental prototypes from trajectory prototypes sharing similar semantics. Experiments demonstrate that the environment-specific factors estimated by our method have superior clustering performance and can consistently improve MBRL's generalization performance in six environments.
1 INTRODUCTION
Reinforcement learning (RL) has achieved great success in solving sequential decision-making problems, e.g., board games (Silver et al., 2016; 2017; Schrittwieser et al., 2020), computer games (Mnih et al., 2013; Silver et al., 2018; Vinyals et al., 2019), and robotics (Levine & Abbeel, 2014; Bousmalis et al., 2018), but it still suffers from low sample efficiency, making it challenging to solve real-world problems, especially those with limited or expensive data (Gottesman et al., 2018; Lu et al., 2018; 2020; Kiran et al., 2020). In contrast, model-based reinforcement learning (MBRL) (Janner et al., 2019; Kaiser et al., 2019; Schrittwieser et al., 2020; Zhang et al., 2019; van Hasselt et al., 2019; Hafner et al., 2019b;a; Lenz et al., 2015) has recently received wider attention, because it explicitly builds a predictive model and can generate samples for learning RL policies to alleviate the sample inefficiency problem.
As a sample-efficient alternative, model-based RL methods derive a policy from a learned model of the environmental dynamics. Therefore, the dynamics model's prediction accuracy is highly correlated with policy quality (Janner et al., 2019). However, it has been shown that the learned dynamics prediction model is not robust to changes in the environmental dynamics (Lee et al., 2020; Seo et al., 2020; Guo et al., 2021), and thus the agent in model-based RL algorithms generalizes poorly to environments with different dynamics. Such vulnerability to changes in environmental dynamics makes model-based RL methods unreliable in real-world applications where the factors that affect dynamics are only partially observed. For example, the friction coefficient of the ground is usually difficult to measure, while changes in it can largely affect the dynamics when controlling a robot walking on the ground, leading to performance degradation of an agent trained by model-based RL methods (Yang et al., 2019; Gu et al., 2017; Nagabandi et al., 2018b).
Recent studies (Seo et al., 2020; Nagabandi et al., 2018a; Lee et al., 2020; Guo et al., 2021) have demonstrated that incorporating an environmental factor Z into dynamics prediction facilitates the generalization of model-based RL methods to unseen environments. However, environmental factors are unobservable in the majority of applications; for instance, the friction coefficient is not available to robots. Therefore, estimating a semantically meaningful Z for each environment is the first step towards generalization of model-based RL. This is not easy to implement, because environments are hard to label. For example, it is impractical to measure the friction coefficient of every road. Without environment label information, the Zs estimated by previous methods (Seo et al., 2020; Nagabandi et al., 2018a; Lee et al., 2020; Guo et al., 2021) cannot form clear clusters for different environments, as Figure 3 shows. These entangled Zs cannot represent distinct environment-specific information, and thus may cause the learned dynamics prediction function to deviate from the true one, resulting in poor generalization ability.
In this paper, we propose a hierarchical prototypical method (HPM) with the objective of learning an environment-specific representation with distinct clusters. By representing environment-specific information in a semantically meaningful way, HPM learns a more generalizable dynamics prediction function. To achieve this, our method constructs a set of environmental prototypes to capture environment-specific information for each environment. By enforcing the estimated Ẑ to be more similar to its respective environmental prototype and dissimilar to other prototypes, the estimated Ẑs can form compact clusters for the purpose of learning a generalizable dynamics prediction function. Because environment labels are not available, we cannot construct environmental prototypes directly. To address this issue, we begin by building easily-learned trajectory prototypes based on the trajectory labels. Then, environmental prototypes can be created by merging trajectory prototypes with similar semantics, as suggested by the natural hierarchical relationship between trajectories and environments.
With the built hierarchical prototypical structure, we further propose a prototypical relational loss to learn Z from past transitions. Specifically, we not only aggregate the Ẑs with similar causal effects by optimizing the relational loss (Guo et al., 2021), but also aggregate each Ẑ with its corresponding trajectory and environmental prototypes via the relational loss. In addition, to alleviate the over-penalization of semantically similar prototypes, we propose to penalize prototypes adaptively with the intervention similarity. In the experiments, we evaluate our method on a range of tasks in OpenAI Gym (Brockman et al., 2016) and MuJoCo (Todorov et al., 2012). The experimental results show that our method can form clearer and tighter clusters of Ẑs, and that such Ẑs can improve the generalization ability of model-based RL methods and achieve state-of-the-art performance in new environments with different dynamics without any adaptation step.
2 RELATED WORK
Model-based reinforcement learning With a learned dynamics prediction model, model-based reinforcement learning (MBRL) enjoys high data efficiency. The learned prediction model can generate samples for training the policy (Du & Narasimhan, 2019; Whitney et al., 2019) or for planning ahead at inference time (Atkeson & Santamaria, 1997; Lenz et al., 2015; Tassa et al., 2012). Therefore, the performance of MBRL relies heavily on the prediction accuracy of the dynamics model. To improve the predictive model's accuracy in MBRL, several methods have been proposed, such as ensemble methods (Chua et al., 2018), latent dynamics models (Hafner et al., 2019b;a; Schrittwieser et al., 2020), and bidirectional prediction (Lai et al., 2020). However, current predictive methods still struggle to generalize to unseen dynamics, which hinders the application of MBRL methods to real-world problems.
Dynamics generalization in model-based reinforcement learning To adapt MBRL to unknown dynamics, meta-learning methods (Nagabandi et al., 2018a;b; Sæmundsson et al., 2018) attempted to adapt model parameters with a small number of gradient updates (Finn et al., 2017) or by updating the hidden representations of a recurrent model (Doshi-Velez & Konidaris, 2016). Then, using multi-choice learning, (Lee et al., 2020; Seo et al., 2020) attempted to learn a generalized dynamics model by incorporating environment-specific information or clustering dynamics implicitly, with the goal of adapting to any dynamics without training. Through relational learning and causal effect estimation, RIA (Guo et al., 2021) aims to explicitly learn meaningful environment-specific information. However, the dynamics changes learned by RIA still suffer from a high-variance issue.
Prototypical methods By learning an encoder to embed data in a low-dimensional representation space, prototypical methods obtain a set of prototypical embeddings, referred to as prototypes (Asano et al., 2020; Caron et al., 2020b), that form the basis of this representation space. Prototypical methods aim to derive compact data representations gathered around corresponding prototypes (Li et al., 2021; Oord et al., 2018; Wang et al., 2021), which capture some basic semantic structure. Therefore, prototypical methods have been applied in many areas, e.g., self-supervised learning (Li et al., 2020; Caron et al., 2020a), few-shot learning (Snell et al., 2017; Bateni et al., 2020; Simon et al., 2020), domain adaptation (Tanwisuth et al., 2021) and continual learning (De Lange & Tuytelaars, 2021; Yu et al., 2020). In the RL area, (Yarats et al., 2021) ties representation learning to exploration through prototypical representations for image-based RL, while our method focuses on the unsupervised dynamics generalization problem in model-based RL, aiming to learn semantically meaningful dynamics changes using a prototypical method. Specifically, our method proposes a hierarchical approach to construct environmental prototypes from trajectory prototypes.
3 METHODS
In this section, we first introduce the formulation of the unsupervised dynamics generalization problem in model-based reinforcement learning. Then we present the details of how our hierarchical prototype method learns the environment-specific factors.
3.1 PROBLEM SETUP
We formulate standard reinforcement learning as a Markov decision process (MDP) $\mathcal{M} = (\mathcal{S}, \mathcal{A}, r, f, \gamma, \rho_0)$ over discrete time (Puterman, 2014; Sutton & Barto, 2018), where $\mathcal{S}$, $\mathcal{A}$, $\gamma \in (0, 1]$ and $\rho_0$ are the state space, action space, reward discount factor, and initial state distribution, respectively. The dynamics function $f : \mathcal{S} \times \mathcal{A} \to \mathcal{S}$ gives the next state $s_{t+1}$ conditioned on the current state $s_t$ and action $a_t$, and the reward function $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ specifies the reward at each timestep $t$ given $s_t$ and $a_t$. The goal of RL is to learn a policy $\pi(\cdot|s)$ mapping from a state $s \in \mathcal{S}$ to a distribution over actions that maximizes the cumulative expected return $\mathbb{E}_{s_t \in \mathcal{S}, a_t \in \mathcal{A}}[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)]$ over timesteps. In model-based RL, we aim to learn a prediction model $\hat{f}$ to approximate the dynamics function $f$; $\hat{f}$ can then generate training data for the policy $\pi$ or predict future sequences for planning. With the data provided by the learned dynamics model $\hat{f}$, model-based RL has higher data efficiency and better planning ability compared with model-free RL.
In this paper, we consider the unsupervised dynamics generalization problem in model-based RL. Different from standard reinforcement learning, in the dynamics generalization problem there exists an unobserved variable Z that affects the dynamics prediction function f. The goal of dynamics generalization is to derive a generalizable policy from the given K training MDPs $\{\mathcal{M}^{tr}_i\}_{i=0}^{K}$, such that the policy generalizes well to L test MDPs $\{\mathcal{M}^{te}_j\}_{j=0}^{L}$. Without loss of generality, we assume all MDPs share the same state and action spaces but have different factors Z.
In the context of model-based reinforcement learning, we need to learn the dynamics function before learning the policy. In order to generalize the dynamics function across different environments, we need to incorporate the unobserved variable Z into the dynamics prediction process, i.e., extend the dynamics function from $f : \mathcal{S} \times \mathcal{A} \to \mathcal{S}$ to $f : \mathcal{S} \times \mathcal{A} \times \mathcal{Z} \to \mathcal{S}$. Since Z is not available, we estimate it from past transition segments $\tau_{t-k:t-1} = \{(s_{t-k}, a_{t-k}), \dots, (s_{t-1}, a_{t-1})\}$ (Seo et al., 2020; Lee et al., 2020; Guo et al., 2021).
Next, we present how our hierarchical prototypes method estimates Z and thereby learns a dynamics function f that can generalize to environments with unseen dynamics. In Section 3.2, we present how our method hierarchically constructs prototypes as representative embeddings of environment-specific information for each environment. In Section 3.3, we describe how we update prototypes dynamically and how to estimate the environment-specific factor Z from past transition segments using the prototypes. Once Z is estimated, we describe how it enables the dynamics function f to generalize well to environments with different dynamics.
3.2 HIERARCHICAL ENVIRONMENT PROTOTYPES CONSTRUCTION
The objective of our method is to construct a set of prototypes to represent the environment-specific information of each environment, and to guide the context encoder in estimating the environment-specific variable Z from historical transition segments. In each training iteration, we randomly sample a trajectory from a subset of the training MDPs. Because labels of MDPs are not available, we cannot estimate environmental prototypes directly. Fortunately, we still have the trajectory label information, and thus we can first construct a prototype for each sampled trajectory. Specifically, we denote the prototype for the j-th trajectory as $c^j_{tra}$. Because different trajectories may be sampled from a single environment, trajectory prototypes from the same environment should share similar semantics for dynamics prediction. Therefore, we can construct environmental prototypes hierarchically from trajectory prototypes sharing similar semantics. In this way, environmental prototypes and trajectory prototypes form a natural hierarchical structure, and environmental prototypes can be constructed utilizing trajectory label information even though no environment label is available.
If we denote by w^{i,j}_tra the semantic similarity between trajectory prototypes c^i_tra and c^j_tra, we can construct a trajectory similarity matrix w, as Figure 2 (b) shows, where each row w^i represents the similarity between c^i_tra and all other trajectory prototypes. Because it is unknown how many environments are represented among the sampled trajectories, we directly construct an environmental prototype c^i_env for each trajectory prototype c^i_tra. Specifically, each environmental prototype c^i_env is the mean of its corresponding trajectory prototype c^i_tra and c^i_tra's top-K most similar trajectory prototypes:
c^i_env = (1/K) ∑_{k ∈ T_i} c^k_tra,   (1)
where T_i denotes the index set of the top-K trajectory prototypes most similar to c^i_tra. In this way we can obtain the i-th environmental prototype; before that, however, we need to calculate the semantic similarity matrix w.
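As a minimal sketch of Eq. (1), assuming the similarity matrix w has already been computed (via Eq. (2) below), each environmental prototype averages a trajectory prototype with its most similar peers; keeping the anchor prototype itself inside the top-K set, because its self-similarity is maximal, is our reading of the construction, and all variable names are illustrative.

```python
import numpy as np

def env_prototypes(c_tra, w, top_k):
    """Eq. (1): each environmental prototype is the mean of a trajectory
    prototype and its most similar trajectory prototypes.
    c_tra: (n, d) trajectory prototypes; w: (n, n) similarity matrix."""
    n = c_tra.shape[0]
    c_env = np.zeros_like(c_tra)
    for i in range(n):
        # self-similarity w[i, i] = exp(0) = 1 is maximal, so index i is kept
        idx = np.argsort(w[i])[::-1][:top_k]
        c_env[i] = c_tra[idx].mean(axis=0)
    return c_env
```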
Normally, we could directly use the Euclidean distance to measure the similarity between different trajectory prototypes. However, this ignores the semantic effect of trajectory prototypes on dynamics prediction. If two trajectory prototypes are from a single environment, they should share the same semantics, i.e., their effects on the dynamics function should be the same. Therefore, we take the semantic effect on dynamics prediction into account when estimating similarity. However, it is challenging to estimate the effect of a trajectory prototype on the dynamics function, because Z is not the only factor that can influence the dynamics function. To remove the effects of other factors, e.g., states and actions, on the dynamics function, our method draws inspiration from the recently proposed RIA method (Guo et al., 2021) and calculates the direct causal effects (CDE) of trajectory prototypes. By controlling all factors that affect the dynamics function over a mini-batch, we can estimate the average CDE between different trajectory prototypes alone as their semantic difference d. Concretely, we compute d between two trajectory prototypes using a mini-batch of S_t and A_t pairs (s^k_t, a^k_t), as Figure 2 (a) shows:
d_{ij} = (1/N) ∑_{k=1}^{N} |CDE_{c^i_tra, c^j_tra}(s^k_t, a^k_t)|,   (2)
where N is the batch size, and i and j are the ids of the trajectory prototypes. Please refer to Appendix A.6 for the details of CDE. Given the semantic difference d, we convert it into the semantic similarity w via w = exp(−d/β), where β is a factor that controls the sensitivity of w. With the calculated similarity w, we can construct environmental prototypes via Eq. (1).
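A hedged sketch of Eq. (2) and the conversion w = exp(−d/β): each trajectory prototype is swapped, in turn, into the dynamics model on a shared mini-batch of (s_t, a_t) pairs, and the mean absolute prediction difference serves as the semantic distance. The deterministic `dynamics(s, a, z)` interface is an assumption for illustration; the paper's prediction model is probabilistic.

```python
import torch

def cde_similarity(dynamics, protos, s, a, beta):
    """Semantic similarity between trajectory prototypes via average |CDE|.
    protos: (n, d_z) prototypes; s, a: a mini-batch shared by all prototypes."""
    n = protos.shape[0]
    with torch.no_grad():
        # prediction of s_{t+1} under each prototype, holding (s_t, a_t) fixed
        preds = [dynamics(s, a, p.expand(s.shape[0], -1)) for p in protos]
    d = torch.zeros(n, n)
    for i in range(n):
        for j in range(n):
            d[i, j] = (preds[i] - preds[j]).abs().mean()  # Eq. (2)
    return torch.exp(-d / beta)  # similarity w = exp(-d / beta)
```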
Next, we describe how to update the built trajectory and environmental prototypes to ensure that the hierarchical prototypes are representative of each trajectory and environment, and how they help learn the context encoder.
3.3 PROTOTYPICAL RELATIONAL LEARNING
As Figure 1 shows, we introduce a context encoder g parameterized by ϕ to estimate the environment-specific factor ẑ^i_t from the past transition segment τ_{t−k:t−1} = {(s_{t−k}, a_{t−k}), ..., (s_{t−1}, a_{t−1})}, following previous methods:

ẑ^i_t = g(τ^i_{t−k:t−1}; ϕ).
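A minimal PyTorch sketch of the context encoder g; the 3-hidden-layer structure and 10-dimensional output follow Appendix A.5, while the hidden width and the flattening of the segment are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """g(tau_{t-k:t-1}; phi): maps a past transition segment to z_hat."""
    def __init__(self, state_dim, action_dim, k, hidden=128, z_dim=10):
        super().__init__()
        in_dim = k * (state_dim + action_dim)
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),  # 3 hidden layers, as in Appendix A.5
            nn.Linear(hidden, z_dim),
        )

    def forward(self, segment):  # segment: (batch, k, state_dim + action_dim)
        return self.net(segment.flatten(1))
```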
In order to learn the context encoder and encourage the estimated environment-specific factor ẑ^i_t to be semantically meaningful, we optimize g via the proposed prototypical relational loss so that the Zs from the same environment form a clear cluster. Concretely, we introduce a relational head (Patacchiola & Storkey, 2020) as a learnable function h to keep the environment-specific estimate ẑ^i_t close to its associated cluster prototypes. To achieve this, we concatenate ẑ^i_t with its assigned prototype, e.g., c^i_tra, as a positive pair, while concatenations with other prototypes serve as negative pairs. Then we use the relational head h parameterized by φ to quantify the similarity score ŷ. Increasing the similarity score ŷ of positive pairs and decreasing those of negatives can be treated as a simple binary classification problem that distinguishes positive from negative pairs. This can be regarded as maximizing the mutual information between Zs and their corresponding prototypes (please refer to (Tsai et al., 2020; Guo et al., 2021) and Appendix A.7). However, it neglects the semantic correlation among different prototypes, and so it may excessively penalize some semantically relevant prototypes. To alleviate such over-penalization, we propose to penalize prototypes adaptively with the intervention similarity (Guo et al., 2021) through the following objective:
L^{i−p−relation}_{φ,ϕ} = − (1 / (N(N−1))) ∑_{i=1}^{N} ∑_{j=1}^{N} [ (y_{i,j} + (1 − y_{i,j}) · w_{i,j}) · log h([ẑ_i, c_j]; φ) + (1 − y_{i,j}) · (1 − w_{i,j}) · log(1 − h([ẑ_i, c_j]; φ)) ],   (3)
where w ranges from 0 to 1 and serves as the similarity between different prototypes. The first term of Eq. (3) clusters ẑ^i_t with prototypes c^j using the similarity weight w_{i,j}, and the second term pushes them apart with weight 1 − w_{i,j}. To maintain the hierarchical prototype structure (Li et al., 2020; Guo et al., 2022), we simultaneously update the context encoder by optimizing the objective Eq. (3) between z and both the trajectory and the environmental prototypes. Specifically, the similarity w_env between environmental prototypes and z is calculated in the same way as in Section 3.2. In addition, we also optimize the relational loss among different Zs following (Guo et al., 2021; Li et al., 2020), because Z itself can be regarded as an instance prototype; this retains the property of local smoothness and helps bootstrap clustering.
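A hedged PyTorch sketch of Eq. (3): the relational head scores concatenated (ẑ, prototype) pairs, and the binary cross-entropy targets are softened by the similarity weights w_{i,j}. The single-FC-layer head follows Appendix A.5; the tensor layout and the ε-clamping of logarithms are our own assumptions.

```python
import torch
import torch.nn as nn

class RelationalHead(nn.Module):
    """h([z, c]; varphi): similarity score in (0, 1) for a (z, prototype) pair."""
    def __init__(self, z_dim):
        super().__init__()
        self.fc = nn.Linear(2 * z_dim, 1)  # a single FC layer, as in Appendix A.5

    def forward(self, z, c):
        return torch.sigmoid(self.fc(torch.cat([z, c], dim=-1))).squeeze(-1)

def proto_relational_loss(head, z, protos, y, w, eps=1e-8):
    """Eq. (3): adaptively weighted binary cross-entropy over all (z_i, c_j) pairs.
    z: (N, d) estimates; protos: (N, d) prototypes; y, w: (N, N) assignments / similarities."""
    N, d = z.shape
    zi = z.unsqueeze(1).expand(N, N, d).reshape(-1, d)       # z_i repeated over j
    cj = protos.unsqueeze(0).expand(N, N, d).reshape(-1, d)  # c_j repeated over i
    h = head(zi, cj).view(N, N)
    pos = (y + (1 - y) * w) * torch.log(h + eps)             # pull term
    neg = (1 - y) * (1 - w) * torch.log(1 - h + eps)         # push term
    return -(pos + neg).sum() / (N * (N - 1))
```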
In order to improve the generalization ability of the dynamics model across different dynamics, we incorporate the estimated environment-specific ẑ_t into the dynamics prediction model f̂ and optimize the following objective, following (Lee et al., 2020; Seo et al., 2020; Janner et al., 2019):
L^{pred}_{θ,ϕ} = − (1/N) ∑_{i=1}^{N} log f̂(s^i_{t+1} | s^i_t, a^i_t, g(τ^i_{t−k:t−1}; ϕ); θ),   (4)
where k is the length of the transition segments, t is the current timestep, and N is the sample size. In addition, we also feed the built prototypes into Eq. (4) to ensure that the learned prototypes are semantically meaningful. Overall, our method simultaneously optimizes the prediction loss Eq. (4) and the prototypical relational loss Eq. (3) with prototypes at different levels to learn the context encoder g and semantically meaningful prototypes, which encourages the estimated environment-specific Ẑ to form clear clusters and thus yields a generalizable prediction function f̂.
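Putting the two objectives together, a schematic joint update of θ, ϕ, and φ; it reuses the `proto_relational_loss` sketched above, and the mean-squared-error surrogate for the Gaussian log-likelihood in Eq. (4) and the unit loss weights are illustrative assumptions.

```python
import torch

def training_step(dynamics, encoder, head, optimizer, batch,
                  c_tra, c_env, y_tra, w_tra, y_env, w_env):
    """One joint update of theta (dynamics), phi (encoder), and varphi (head)."""
    seg, s, a, s_next = batch                # past segments and current transitions
    z = encoder(seg)                         # z_hat = g(tau_{t-k:t-1}; phi)
    pred_loss = ((dynamics(s, a, z) - s_next) ** 2).mean()  # MSE surrogate for Eq. (4)
    rel_loss = (proto_relational_loss(head, z, c_tra, y_tra, w_tra)     # trajectory level
                + proto_relational_loss(head, z, c_env, y_env, w_env))  # environment level
    loss = pred_loss + rel_loss              # unit weights assumed
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```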
3.4 DIFFERENCES FROM RIA
Our method borrows from RIA (Guo et al., 2021) the idea of estimating semantic similarities via causal effects. However, our method differs from RIA in three aspects: 1) RIA estimates the semantic similarities between different instance estimates ẑ, while our method estimates the semantic similarities between different prototypes. Since the number of prototypes is limited, the training procedure is faster and more stable than RIA's. 2) Our method takes full advantage of the hierarchy between trajectories and environments and constructs environmental prototypes based on trajectory label information, while RIA ignores it; thus our method can achieve better performance than RIA. 3) RIA only pulls ẑ toward other estimates with similar semantics, while our prototypical relational learning further pulls ẑ toward its corresponding trajectory prototype c_tra and environmental prototype c_env.
4 EXPERIMENT
In this section, we perform experiments to evaluate the effectiveness of our approach by answering the following questions: 1) Can our method encourage the learned Z to form clear clusters? (Section 4.2); 2) Can the learned Ẑ with clear clusters reduce the dynamics prediction errors in model-based RL? (Supplementary Material A.4); 3) Can the learned Ẑ with clear clusters promote the performance of model-based RL in environments with unseen dynamics? (Section 4.3); 4) Is our method sensitive to hyperparameters? (Section 4.4)
4.1 ENVIRONMENTAL SETUP
Implementation details Our method includes three learnable functions and a set of learnable trajectory prototypes. The learnable functions are the context encoder, the relational head, and the prediction head; all are constructed as MLPs and optimized by Adam (Kingma & Ba, 2014) with a 1e-3 learning rate. During training, transition segments are randomly sampled from the same trajectory to break the temporal correlations of the training data, a practice also adopted by (Seo et al., 2020; Guo et al., 2021). Specifically, we combine the k = 3 most similar trajectory embeddings into an environmental embedding, and the length of the transition segments is 10. The hyper-parameters are the same for all experiments; details can be found in supplementary material A.1.

Tasks Following previous methods (Lee et al., 2020; Seo et al., 2020), we perform experiments on a classic control task (Pendulum) from OpenAI gym (Brockman et al., 2016) and simulated robotic control tasks (HalfCheetah, Swimmer, Ant, Hopper, Slim-Humanoid) from the Mujoco physics engine (Todorov et al., 2012).

Dynamics settings To construct environments with different dynamics, we change the environmental parameters (e.g., the length and mass of Pendulum) and predefine them in training and test environmental parameter lists, following previous methods (Zhou et al., 2019; Packer et al., 2019; Lee et al., 2020; Seo et al., 2020; Guo et al., 2021). Specifically, for convenience the training environmental parameter lists for all tasks are {0.75, 0.8, 0.85, 0.90, 0.95, 1, 1.05, 1.1, 1.15, 1.2, 1.25}, and the test environmental parameter lists are {0.2, 0.4, 0.5, 0.7, 1.3, 1.5, 1.6, 1.8}. Note that the test parameters lie outside the range of the training parameters. At training time, we randomly sample parameters from the training parameter list to train our context encoder and dynamics prediction model. We then test our model on environments with unseen dynamics sampled from the test parameter list. All details are given in supplementary material A.1.

Planning Following (Lee et al., 2020; Seo et al., 2020), we use model predictive control (MPC) (Maciejowski, 2002) to select actions based on the learned dynamics prediction model, and assume that reward functions are known. In addition, we use the cross-entropy method (CEM) (De Boer et al., 2005) to find the best action sequences.

Baselines We compare our approach with the following state-of-the-art model-based RL methods for dynamics generalization:
• Context-aware dynamics model (CaDM) (Lee et al., 2020): This method designs several auxiliary losses, including backward and future state prediction, to learn the context from transition segments.
• Trajectory-wise Multiple Choice Learning (TMCL) (Seo et al., 2020): TMCL introduces multiple choice learning to adapt to different environments. For a fair comparison, we use the no-adaptation version of this method.
• Relation Intervention Approach (RIA) (Guo et al., 2021): This method proposes a relational intervention loss to cluster Zs from the same environment.
It has been clearly evidenced that the probabilistic ensemble dynamics model (PETS) (Kurutach et al., 2018) and meta-learning-based model-based RL methods, e.g., the recurrent model ReBAL and the hidden-parameter model GrBAL (Nagabandi et al., 2018b;a), perform worse than CaDM (Lee et al., 2020), TMCL (Seo et al., 2020), and RIA (Guo et al., 2021), so we do not consider them as baselines in our paper.
4.2 CLUSTER VISUALIZATION AND ANALYSIS
We perform PCA visualization of the estimated Ẑs from the baselines and our method in Figure 3 to evaluate the cluster performance of the estimated Ẑs. We can see that our method achieves better cluster performance qualitatively. Specifically, most Ẑs estimated by RIA (Guo et al., 2021) have good cluster performance in general, but outliers decrease the cluster performance. By contrast, there are fewer outliers in our method than in RIA, because the built prototypes and the proposed prototypical relational loss enforce constraints on the estimated Ẑs. More qualitative cluster comparisons can be found in Supplementary Material A.8.
We also quantitatively evaluate the cluster performance of the Ẑs estimated by the baselines and our method. We first perform k-means (MacQueen et al., 1967) on the estimated Ẑs, and then use the ground-truth environmental labels to calculate the cluster performance. We use the popular mutual-information-based metric AMI (Vinh et al., 2010), the random-index-based metric ARI (Hubert & Arabie, 1985), and the V-measure (Rosenberg & Hirschberg, 2007) as evaluation metrics. The results are shown in Table 3; Ẑs estimated by our method achieve the highest cluster performance. More quantitative cluster comparisons can be found in Supplementary Material A.8.
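This evaluation protocol can be reproduced with scikit-learn; a small sketch, assuming the estimated ẑ vectors and ground-truth environment ids are available as arrays.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (adjusted_mutual_info_score,
                             adjusted_rand_score, v_measure_score)

def cluster_scores(z_hat, env_labels, n_envs):
    """k-means on estimated z, scored against ground-truth environment ids."""
    pred = KMeans(n_clusters=n_envs, n_init=10).fit_predict(z_hat)
    return {"AMI": adjusted_mutual_info_score(env_labels, pred),
            "ARI": adjusted_rand_score(env_labels, pred),
            "V-measure": v_measure_score(env_labels, pred)}

# toy usage with random data in place of learned estimates
z_hat = np.random.randn(200, 10)
labels = np.random.randint(0, 4, size=200)
print(cluster_scores(z_hat, labels, n_envs=4))
```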
4.3 PERFORMANCE COMPARISONS
[Figure 6: test returns of all methods on test environments with unseen dynamics; panel titles include Slim_Humanoid and Hopper.]
We then evaluate the generalization of model-based RL agents trained by our method and the baselines on test environments with unseen dynamics. Following the setting of (Seo et al., 2020), we perform experiments across five runs and show the test returns on the test environments in Figure 6. Note that the results differ slightly from those in the RIA and TMCL papers, since we changed the parameter lists that control the environmental dynamics. Specifically, we set the parameter lists of all environments to be the same for the convenience of running the experiments.
As Figure 6 shows, our method achieves significantly better performance than the baselines on Ant, Halfcheetah, and Pendulum. Specifically, our method outperforms the second-best method, RIA, by 20% on the Ant and Halfcheetah environments, which indicates that the changed parameters can largely alter their dynamics. In addition, our method achieves only slightly better performance than the baselines on the Hopper, Swimmer, and Slim_Humanoid problems. For the Hopper and Slim_Humanoid environments, we observe that both RIA and our method achieve comparable results in all test environments, which indicates that the change of dynamics for Hopper is easy to model and solve. For the Swimmer environment, we observe that TMCL (Seo et al., 2020) sometimes suffers a significant performance decline at the final training iteration. This may be because TMCL fails to learn the modalities of the dynamics function in the no-adaptation version. Our method still achieves better performance than RIA on the Swimmer task.
4.4 ABLATION STUDY
In this section, we first perform a sensitivity analysis of how many trajectory prototypes should be combined into an environmental prototype. The experiments are conducted on the Pendulum task, and the results are shown in the left panel of Figure 5. We can see that regardless of the value of k, our method consistently outperforms the baseline CaDM (Lee et al., 2020), which indicates that our method is robust to the choice of k. Note that k = 1 means there are no hierarchical prototypes, because one trajectory prototype determines one environmental prototype, and thus environmental prototypes are the same as trajectory prototypes. We can see that all experimental results with k > 1 are better than the result with k = 1, which shows the effectiveness of our hierarchical prototypes method and the necessity of the built environmental prototypes. The results with k = 3 achieve the best performance on the Pendulum task, so we use it as the default parameter in all experiments.
We also perform an ablation study on the similarity metric used to calculate the similarity among trajectory prototypes. Most clustering methods, e.g., k-means (MacQueen et al., 1967), calculate the similarity among entities using the Euclidean distance, while our method uses the direct causal effect (Pearl, 2013) as the similarity metric. To evaluate the effectiveness of the causal-effect-based similarity metric, we perform experiments on the Halfcheetah and Pendulum tasks, and we find that using the causal effect to calculate the similarities among trajectory prototypes achieves better performance than using the Euclidean distance on both tasks.
5 LIMITATION
Our paper only considers unsupervised dynamics generalization in model-based reinforcement learning, but model-free RL also suffers from this problem, and we will apply our method to model-free RL in future work. In addition, there are many other generalization problems in the reinforcement learning area, e.g., observation generalization (Wang et al., 2020; Kirk et al., 2021; Ghosh et al., 2021) and action generalization (Jain et al., 2020), and it would be interesting to extend our method to other generalization settings and train generalizable agents.
6 CONCLUSION
In this paper, we focus on the unsupervised dynamics generalization problem in model-based reinforcement learning, and propose a hierarchical prototypical method to construct environmental prototypes in an unsupervised manner. With the learned environmental prototypes, we further propose a prototypical relational loss to learn a context encoder to estimate environmental-specific factors from past transition segments, which enables the dynamics prediction function in model-based reinforcement learning to generalize well on environments with unseen dynamics. The experiments demonstrate that our method can form clearer and tighter clusters for Ẑs from the same environment and improve the performance of model-based agents in new environments with unseen dynamics.
7 REPRODUCIBILITY STATEMENT
We acknowledge the importance of reproducibility for research work and try whatever we can to ensure the reproducibility of our work. As for the implementation of our method, details such as hyperparameters are provided in Section 4.1 and Appendix A.1. We will publicly release all codes after the acceptance of this paper.
A APPENDIX
We promise that we will release all code after the acceptance of this paper, and we provide all training details in Appendix A.1 and A.3.
A.1 ENVIRONMENTAL SETTINGS
We follow the environmental settings of Lee et al. (2020); Guo et al. (2021) and give the details of settings as follows:
• Pendulum We modify the mass m and the length l of the Pendulum to change its dynamics.
• Half-Cheetah We modify the mass of the rigid links m and the damping of the joints d of the Half-Cheetah agent to change its dynamics.
• Swimmer We modify the mass of rigid link m and the damping of joint d of Swimmer agent to change its dynamics.
• Ant We modify the mass of the ant's legs m to change its dynamics. Specifically, we modify two legs by multiplying their original mass by m, and the other two by 1/m.
• Slim_Humanoid We modify the mass of the rigid links m and the damping of the joints d of the Slim_Humanoid agent to change its dynamics.
• Hopper We modify the mass of m of the Hopper agent to change its dynamics.
Specifically, all training and test parameter lists are set as {0.75, 0.8, 0.85, 0.90, 0.95, 1, 1.05, 1.1, 1.15, 1.2, 1.25} and {0.2, 0.4, 0.5, 0.7, 1.3, 1.5, 1.6, 1.8}, respectively.
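A hedged sketch of how such dynamics variations can be realized for the MuJoCo tasks by rescaling the simulator's body masses and joint damping; the `body_mass` and `dof_damping` attribute paths follow the mujoco_py-based gym API, and the env id is only an example, so details may differ across gym/MuJoCo versions.

```python
import numpy as np
import gym

def make_env_with_scale(env_name, mass_scale, damping_scale):
    """Build a MuJoCo env whose dynamics are a rescaled version of the default."""
    env = gym.make(env_name)
    model = env.unwrapped.model
    model.body_mass[:] = model.body_mass * mass_scale         # scale rigid-link masses
    model.dof_damping[:] = model.dof_damping * damping_scale  # scale joint damping
    return env

# e.g., sample a training dynamics for HalfCheetah at episode start
train_scales = [0.75, 0.8, 0.85, 0.9, 0.95, 1.0, 1.05, 1.1, 1.15, 1.2, 1.25]
env = make_env_with_scale("HalfCheetah-v2", float(np.random.choice(train_scales)), 1.0)
```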
A.2 ALGORITHM
The training procedure is given in Algorithm 1.
A.3 TRAINING DETAILS
Similar to Lee et al. (2020) and Guo et al. (2021), we train our model-based RL agents and context encoder for 20 epochs, and we collect 10 trajectories at each epoch with an MPC controller with a horizon of 30. In addition, the cross-entropy method (CEM) with 200 candidate actions is chosen as the planning method. The batch size for each experiment is 128, and β is 0.6. All modules are learned by an Adam optimizer with a 0.001 learning rate.
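For concreteness, a minimal sketch of CEM-based action selection under the learned model, matching the 200 candidate action sequences and the horizon of 30 mentioned above; the elite fraction and iteration count are illustrative assumptions, and `predict` and `reward_fn` are hypothetical callables.

```python
import numpy as np

def cem_plan(predict, reward_fn, s0, z, a_dim,
             horizon=30, pop=200, n_elite=20, iters=5):
    """Cross-entropy method over action sequences on the learned model.
    predict(s, a, z) -> next state; reward_fn(s, a) -> scalar reward."""
    mu, std = np.zeros((horizon, a_dim)), np.ones((horizon, a_dim))
    for _ in range(iters):
        cand = mu + std * np.random.randn(pop, horizon, a_dim)  # candidate sequences
        returns = np.zeros(pop)
        for i in range(pop):
            s = s0
            for t in range(horizon):
                returns[i] += reward_fn(s, cand[i, t])
                s = predict(s, cand[i, t], z)                   # roll out learned model
        elite = cand[np.argsort(returns)[-n_elite:]]            # refit to the elites
        mu, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu[0]  # MPC: execute only the first action, then replan
```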
A.4 PREDICTION ERROR
A.5 NETWORK DETAILS
Similar to Lee et al. (2020), the context encoder is constructed as a simple 3-hidden-layer MLP, and the output dimension of the environment-specific vector ẑ is 10. The relational head is modelled as a single FC layer. The dynamics prediction model is a 4-hidden-layer fully-connected network with 200 units per layer.
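Putting these sizes together for the prediction model, a sketch of the 4-hidden-layer, 200-unit dynamics network conditioned on (s, a, ẑ); outputting only the next-state mean is an illustrative simplification of the probabilistic model.

```python
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """f_hat(s_{t+1} | s_t, a_t, z; theta): 4 hidden FC layers with 200 units each."""
    def __init__(self, state_dim, action_dim, z_dim=10, hidden=200):
        super().__init__()
        layers, in_dim = [], state_dim + action_dim + z_dim
        for _ in range(4):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(hidden, state_dim))  # mean-only output (assumption)
        self.net = nn.Sequential(*layers)

    def forward(self, s, a, z):
        return self.net(torch.cat([s, a, z], dim=-1))
```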
Algorithm 1 The training process of our method
Initialize parameters of context encoder ϕ, dynamics prediction model θ, and relational head φ
Initialize dataset B ← ∅
for each iteration do
    Sample environments M_i from training environments {M^tr_i}_{i=0}^{K}    ▷ Collecting data
    for T = 1 to TaskHorizon do
        Get the estimate of the environment-specific factor ẑ^i_{t−k:t−1} = g(τ^i_{t−k:t−1}; ϕ)
        Collect (s_t, a_t, s_{t+1}, r_t, τ^i_{t−k:t−1}) from M_i with dynamics prediction model θ
        Update B ← B ∪ {(s_t, a_t, s_{t+1}, r_t, τ^i_{t−k:t−1})}
        Initialize trajectory prototype C^i_tra for each sampled trajectory
    end for
    for each dynamics training iteration do    ▷ Update ϕ, θ, and φ
        for k = 1 to K do
            Sample data (τ^i_{t−k:t−1}, C^i_tra) and (τ^j_{t−k:t−1}, C^j_tra) with batch size B from B
            Get the estimates ẑ^i_{t−k:t−1} = g(τ^i_{t−k:t−1}; ϕ) and ẑ^j_{t−k:t−1} = g(τ^j_{t−k:t−1}; ϕ)
            Estimate the similarity w between C^i_tra and C^j_tra
            Combine the top-k similar C^i_tra into environmental prototypes C^i_env
            L_tot = L^pred_{θ,ϕ}(τ^i, ẑ^i_{t−k:t−1}) + L^{i−p−relation}_{ϕ,φ}(ẑ^i_{t−k:t−1}, C^i), with prototypes at different levels
            Update θ, ϕ, φ ← ∇_{θ,ϕ,φ} (1/B) L_tot
        end for
    end for
end for
A.6 DIRECT CAUSAL EFFECT BETWEEN TRAJECTORY PROTOTYPES
Concretely, the direct causal effect difference between two trajectory prototypes c^j_tra and c^k_tra can be calculated through the controlled direct effect (Pearl, 2013) as follows:
CDE_{c^j_tra, c^k_tra}(s_t, a_t) = E[S_{t+1} | do(S_t = s_t, A_t = a_t), do(Z = c^j_tra)]   (5)
− E[S_{t+1} | do(S_t = s_t, A_t = a_t), do(Z = c^k_tra)]   (6)
= E[S_{t+1} | S_t = s_t, A_t = a_t, Z = c^j_tra] − E[S_{t+1} | S_t = s_t, A_t = a_t, Z = c^k_tra],   (7)
where do denotes the do-calculus (Pearl, 2000). Because we control all variables that can influence S_{t+1}, and there is no confounder between the mediators (S_t, A_t) and S_{t+1} other than Z (Guo et al., 2021), we can remove all do operators in Eq. (6). Therefore, the interventional distribution controlling Z and (S_t, A_t) in Eq. (6) is equal to the conditional distribution in Eq. (7). In addition, the direct causal effect between c^j_tra and c^k_tra may differ for different values of S_t and A_t, so we should sample S_t and A_t independently of Z to calculate the average controlled direct effect.
Concretely, we directly use a mini-batch of S_t and A_t pairs (s^i_t, a^i_t) to calculate their average controlled direct effect, as Figure 2 (a) shows:
w_{jk} = (1/N) ∑_{i=1}^{N} |CDE_{c^j_tra, c^k_tra}(s^i_t, a^i_t)|,   (8)
where N is the batch size, and j and k are the ids of the trajectory prototypes.
A.7 CONNECTION BETWEEN RELATION LOSS AND MUTUAL INFORMATION
We denote the environment-specific factor as Z and its prototypes as C. By definition, the mutual information between Z and C is:
I(Z;C) = E_{P_{ZC}}[ log( p(z, c) / (p(z) p(c)) ) ],   (9)
where P_{ZC} is the joint distribution of Z and C, and P_Z and P_C are their marginal distributions. To estimate the mutual information between Z and C, we can use the probabilistic classifier method proposed by Tsai et al. (2020). Concretely, we use a Bernoulli random variable Y to classify whether a given data pair (z, c) is drawn from the joint distribution P_{ZC} (Y = 1) or from the product of the marginal distributions P_Z P_C (Y = 0). Therefore, the mutual information I(Z;C) between Z and C can be rewritten as:
I(Z;C) = E_{P_{ZC}}[ log( p(z, c) / (p(z) p(c)) ) ]
= E_{P_{ZC}}[ log( p(z, c | Y = 1) / p(z, c | Y = 0) ) ]
= E_{P_{ZC}}[ log( (p(Y = 0) P(Y = 1 | z, c)) / (p(Y = 1) P(Y = 0 | z, c)) ) ]   (10)
Obviously, p(Y = 0)/p(Y = 1) can be approximated by the ratio of sample sizes, i.e., n_{P_Z P_C} / n_{P_{ZC}}, while P(Y = 1 | z, c)/P(Y = 0 | z, c) can be measured by a classifier h(Y | z, c) trained with our relational loss below:
L^{relation}_{φ,ϕ} = −[ Y · log h([z, c]; φ) + (1 − Y) · log(1 − h([z, c]; φ)) ],   (11)
where Y = 1 if the given pair (z, c) is from the joint distribution P_{ZC}, and Y = 0 if the given pair (z, c) is from the product of the marginal distributions P_Z P_C. Because p(Y = 0)/p(Y = 1) tends to be a constant, optimizing our relational loss is actually estimating the mutual information I(Z;C) between Z and C. Therefore, optimizing Eq. (11) maximizes the mutual information between ẑ and its corresponding prototype, which represents the semantics of a trajectory or an environment. If the reader is interested in the concrete bounds of this mutual information estimator, please refer to Tsai et al. (2020); Guo et al. (2021).
A.8 VISUALIZATION AND ANALYSIS

[Additional qualitative and quantitative cluster comparisons referenced in Section 4.2.]

Summary Of The Paper
This work studies the problem of generalization in (model-based) RL by learning dynamics models that disentangle generalizable factors from environment-specific factors. Specifically, this paper proposes a hierarchical prototypical method (HPM) with the objective of learning to cluster different environments with similar environment-specific factors, thereby facilitating generalization or fast adaptation in new environments by associating it with a learned cluster.
Strengths And Weaknesses
Strengths
The proposed clustering method is interesting and intuitive.
Learning prototypes of environments as a concept is again interesting, intuitive, and to my knowledge novel.
Proposed method shows stronger results than the baselines considered.
Weaknesses
The authors do not provide a clear motivation for the problem setting. In practice, how is it possible to generate trajectories from a large collection of environments with different factors? What are some practical applications that have these characteristics?
In my mind, the most plausible way to generate trajectories from diverse environments is through simulation. In this case, there are several sim2real papers that also attempt to learn from simulation and generalize to real world. Such lines of work have not been discussed in related work.
In Fig 4, what would be the asymptotic performance of an oracle RL agent trained on the unseen test environment. Without this information, it is unclear how good is the performance of proposed method in absolute terms (i.e. relative to an oracle).
I feel several crucial baselines are missing. Examples are
(a) Methods that perform some form of domain randomization, like EPOpt[1], that show that a single policy can be successful despite variations in the environment on several of the tasks considered in this paper.
(b) A simple recurrent policy [2] which has been recently shown to be competitive with methods that explicitly perform adaptation.
References
[1] Rajeswaran et al. 2016. EPOpt: Learning Robust Neural Network Policies Using Model Ensembles.
[2] Ni et al. 2021. Recurrent Model-Free RL Can Be a Strong Baseline for Many POMDPs.
Clarity, Quality, Novelty And Reproducibility
See above section. Several lines of work are missing in related work, and important baselines have not been considered. |
We formulate the standard reinforcement learning as a markov decision process (MDP) M = (S,A, r, f, γ, ρ0) over discrete time (Puterman, 2014; Sutton & Barto, 2018), where S , A, γ ∈ (0, 1] and ρ0 are state space, action space, the reward discount factor, and the initial state distribution, respectively. Dynamics function f : S ×A → S gives the next state st+1 conditioned on the current state st and action at, and reward function r : S × A → R specifies the reward at each timestep t given st and at. The goal of RL is to learn a policy π(·|s) mapping from state s ∈ S over the action distribution to maximize the cumulative expected return Est∈S,at∈A[ ∑∞ t=0 γ
t r(st, at)] over timesteps. In model-based RL, we aim to learn a prediction model f̂ to approximate the dynamics function f , and then f̂ can generate training data to train policy π or predict the future sequences for planning. With the data provided by learned dynamics model f̂ , model-based RL has higher data efficiency and better planing ability compared with model-free RL.
In this paper, we consider the unsupervised dynamics generalization problem in model-based RL. Different from the standard reinforcement learning, there exists an unobserved variable Z that can affect the dynamics prediction function f in the dynamics generalization problem. The goal of dynamics generalization is to derive a generalizable policy from given K training MDPs {Mtri }Ki=0, and expect the policy can generalize well on L test MDPs {Mtej }Lj=0. Without losing generality, we assume all MDPs share the same state and action space but preserve different factor Z.
In the context of model-based reinforcement learning, we need to learn the dynamics function before learning policy. In order to generalize the dynamic functions on different environment, we need to incorporate the unobserved variable Z into dynamics prediction process, i.e., extending the dynamics function from f : S ×A → S to f : S ×A × Z → S . Since Z is not available, we should estimate it from past transition segments τt−k:t−1 = {(st−k, at−k), ..., (st−1, at−1)} (Seo et al., 2020; Lee et al., 2020; Guo et al., 2021).
Next, we will present how our hierarchical prototypes method estimates Z, and enable it to learn the dynamics function f that can generalize to environments with unseen dynamics. In Section 3.2, we present how our method hierarchically constructs prototypes as a representative embedding to represent environmental-specific information for each environment. In Section 3.3, we describe how we update prototypes dynamically and how to estimate environmental-specific factors Z from past transition segments using prototypes. Once Z are estimated, we describe how they enable dynamics function f to generalize well environments with different dynamics.
3.2 HIERARCHICAL ENVIRONMENT PROTOTYPES CONSTRUCTION
The objective of our method is to construct a set of prototypes to represent the environmental-specific information for each environment, and guide the context encoder to estimate environmental-specific variable Z from historical transition segments. In each training iteration, we randomly sample a trajectory from a subset of MDPs in the training MDPs. Because labels of MDPs are not available, we cannot estimate environmental prototypes directly. Furtunately, we still have the trajectory label information, and thus we can construct the prototypes for each sampled trajectory first. Specifically, we denote the prototype for j-th trajectory as cjtra. Because different trajectories may be sampled from a single environment, the trajectory prototypes from the same environment should share similar semantics for dynamics prediction. Therefore, we can construct environmental prototypes hierarchically from trajectory prototypes sharing similar semantics. In this way, environmental prototypes and trajectory prototypes form a natural hierarchical structure, and environmental prototypes can be constructed utilising trajectory label information even if no environmental label is available.
If we denote the wi,jtra as the semantical similarity between the trajectory prototypes c i tra and c j tra, we can construct a trajectory similarity matrix w as Figure 2 (b) shows, where each row of w, such as wi represents the similarity between citra and all other trajectory prototypes. Because it is unknown how many environments are in the sampled trajectories, we directly construct environmental prototypes cienv for each trajectory prototype c i tra. Specifically, each environmental prototype c i env is the mean of its corresponding trajectory prototype citra and c i tra’s top k similar trajectory prototypes.
cienv = 1
K ∑ k∈{T i} cktra, (1)
where Ti denotes the index set of the top-K similar trajectory prototypes with citra. In this way, we can obtain the i-th environmental prototypes, but before that, we need to calculate the semantic similarity matrix w.
Normally, we can directly use the euclidean distance to discriminate the similarity between different trajectory prototypes. However, this ignores the semantic effect of trajectory prototypes on dynamics
prediction. If two trajectories prototypes are from a single environment, their trajectory prototypes should share the same semantics, i.e., and their effects on the dynamics function should be the same. Therefore, we consider take account the semantic effect on the dynamics prediction into similarity estimation. However, it is challenging to estimate the effects of the trajectory prototype on the dynamics function because Z is not the only factor that can influence the dynamics function. To remove the effects of other factors, e.g. states and actions, on the dynamics function, our method draws inspiration from the recently proposed RIA method (Guo et al., 2021) to calculate the direct causal effects (CDE) of trajectory prototypes. By controlling all factors that have effects on the dynamics function over a mini-batch, we can solely estimate average CDE between different trajectory prototypes as their semantic difference d. Concretely, we compute d between two trajectory prototypes using a mini-batch of St and At pairs (sit, a i t) as Figure 2 (a) shows:
dij = 1
N N∑ k=1 |CDEcitra,cjtra(s k t , a k t )|, (2)
where N is the batch size, i and j are the id of trajectory prototypes. Please refer to Appendix A.6 for the details of CDE. With semantic difference d, we can convert it as the semantic difference w via w = exp(−dβ ), where β is a factor that controls the sensitivity of w. With the calculated similarity w, we can construct environmental prototypes via Eq. (1).
Next, we will describe how to update the built trajectory and environmental prototypes to ensure that hierarchical prototypes are representative for each trajectory and environment, and how they help learn the context encoder.
3.3 PROTOTYPICAL RELATIONAL LEARNING
As Figure 1 shows, we introduce a context encoder g parameterized by ϕ to estimate environmentalspecific factor ẑit from the past transition segments τt−k:t−1 = {(st−k, at−k), ..., (st−1, at−1)} following previous methods:
ẑit = g(τ i t−k:t−1;ϕ).
In order to learn the context encoder and encourage the estimated environmental-specific factor ẑit to be semantically meaningful, we optimize g via the proposed prototypical relational loss to form a clear cluster for Zs from the same environments. Concretely, we introduce a relational head (Patacchiola & Storkey, 2020) as a learnable function h to derive the environmental-specific estimation ẑit closely surrounded its associated cluster prototypes. To achieve this, we concatenate the ẑit and its assigned prototypes, e.g., citra as the positive pair, and the concatenation of other prototypes are negative pairs. Then we use the relational head h parameterized by φ to quantify the similarity score of ŷ. To increase the similarity score ŷ of positive pairs and decrease those of negatives, we can regard it as a simple binary classification problem to distinguish positive and negative pairs. This can be regarded as maximizing the mutual information between Zs and its corresponding prototypes (Please refer to (Tsai et al., 2020; Guo et al., 2021) and Appendix A.3). However, it neglects the semantic correlation among different prototypes, and so it may excessively penalize some semantically relevant prototypes. To alleviate such over-penalization, we propose to penalize prototypes adaptively with the intervention similarity (Guo et al., 2021) through the following objective:
Li−p−relationφ,ϕ = − 1
N(N − 1) N∑ i=1 N∑ j=1 [ [yi,j + (1− yi,j) · wi,j ] · log h([ẑi, cj ];φ)
+ (1− yi,j) · (1− wi,j) · log (1− h([ẑi, cj ];φ)) ] , (3)
where w ranges from 0 to 1, and we use it as the similarity between different prototypes. In addition, the first term of Eq. (3) clusters zit with prototypes c
j with the similarity weight wi,j , and the second term push them away with weight 1−wi,j . To maintain the hierarchical prototypes structure (Li et al., 2020; Guo et al., 2022), we simultaneously update the context encoder by optimizing the objective Eq. (3) between z with trajectory and environmental prototypes. Specifically, the calculation of similarity wenv between environmental prototypes and z is same with Section 3.2 as . In addition, we also optimize the relation loss among different Zs following (Guo et al., 2021; Li et al., 2020) because Z itself can be regarded as an instance prototype, and thus can retain the property of local smoothness and help bootstrap clustering.
In order to improve its generalization ability on different dynamics, we incorporate the estimated environment-specific ẑt into the dynamics prediction model f̂ and optimize the objective function following (Lee et al., 2020; Seo et al., 2020; Janner et al., 2019):
Lpredθ,ϕ = − 1
N N∑ i=1 log f̂(sit+1|sit, ait, g(τ it−k:t−1;ϕ); θ), (4)
where k is the length of transition segments, t is the current timestep, and N is the sample size. In addition, we also enable the built prototypes to optimize Eq. (4) to ensure that the learned prototypes are semantically meaningful. Overall, our method simultaneously optimize the prediction loss Eq. (4) and prototypical relational loss Eq. (3) with prototypes in different levels to learn context encoder g and semantic meaningful prototypes, which encourage the estimated environmental-specific Ẑ can form clear clusters, and thus can learn a generalizable prediction function f .
3.4 DIFFERENCE TO RIA
Our paper refers the idea of RIA (Guo et al., 2021) to estimate semantic similarities between different prototypes. However, our method differs RIA from three aspects: 1) RIA estimates the semantic similarities between different instance estimation ẑ while our method estimates the semantic similarities between different prototypes. Considering the number of prototypes are limited, the training procedure are faster and more stable than RIA. 2) Our method fully takes the advantage the hierarchy between trajectory and environments, and construct environmental prototype based on trajectory label information while RIA ignores it. Thus our method can achieve better performance than RIA. 3) RIA only pulls ẑ and other estimations with similar semantics, but our prototypical relational learning further pulls the ẑ and its corresponding trajectory ctra and environmental prototypes cenv .
4 EXPERIMENT
In this section, we perform experiments to evaluate the effectiveness of our approach by answering the following questions: 1) Can our method encourages the learned Z to form a clear cluster? (Section 4.2); 2) Can the learned Ẑ with the clear cluster reduce the dynamics prediction errors in model-based RL? (Supplementary Material A.2); 3) Can the learned Ẑ with the clear cluster promote the performance of model-based RL in environments with unseen dynamics? (Section 4.3); 4) Is our method sensitive to hyperparameters? (Section 4.4)
4.1 ENVIRONMENTAL SETUP
Implementation details Our method includes three learnable functions and a set of learnable trajectory prototypes. The learnable functions are context encoder, relational head and prediction head, and they all are constructed with MLP and optimized by Adam (Kingma & Ba, 2014) with 1e-3 learning rate. During the training procedure, the trajectory segments are randomly sampled from the same trajectory to break the temporal correlations of the training data, which was also adopted by (Seo et al., 2020; Guo et al., 2021). Specifically, we combine k = 3 similar trajectory embedding into environmental embedding, and the length of the transition segments is 10, and the hyper-parameters are the same for all experiments, and details can be found in supplementary material A.1. Tasks Following the previous methods (Lee et al., 2020; Seo et al., 2020), we perform experiments on the classic control algorithm (Pendulum) from OpenAI gym (Brockman et al., 2016) and simulated robotics control tasks (HalfCheetah, Swimmer, Ant, Hopper, Slim-Humanoid) from Mujoco physical engine (Todorov et al., 2012). Dynamics settings To construct different dynamics of environments, we change the environmental parameters (e.g., length and mass of Pendulum) and predefine them in the training and test environmental parameters lists following previous methods (Zhou et al., 2019; Packer et al., 2019; Lee et al., 2020; Seo et al., 2020; Guo et al., 2021). Specifically, for the convthe training environmental parameters lists for all tasks are {0.75, 0.8, 0.85, 0.90, 0.95, 1, 1.05, 1.1, 1.15, 1.2, 1.25}, and test environmental parameters lists are {0.2, 0.4, 0.5, 0.7, 1.3, 1.5, 1.6, 1.8}. We can see that the parameters in test list are out of range of the parameters in the training set. At the training time, we randomly sample the parameters from the training parameter list to train our context encoder and dynamics
prediction model. Then we test our model on the environments with unseen dynamics sampled from the test parameter list. All details are given in supplementary material A.1. Planning Following (Lee et al., 2020; Seo et al., 2020), we use the model predictive model (MPC) (Maciejowski, 2002) to select actions based on learned dynamics prediction model, and assume that reward functions are known. In addition, we use the cross-entropy method (CEM) (De Boer et al., 2005) to find the best action sequences. Baselines In this paper, we compare our approach with the following state-of-the-art model-based RL methods on dynamics generalization:
• Context-aware dynamics model (CaDM) (Lee et al., 2020): This method design several auxiliary loss, including backward and future states prediction to learn the context from transition segments.
• Trajectory-wise Multiple Choice Learning (TMCL) (Seo et al., 2020): TMCL introduces multi-choice learning to adapt to different environments. For a fair comparison, we use the no adaptation version of this method.
• Relation Intervention Approach (RIA) (Guo et al., 2021): This method proposes to use relational intervention loss to cluster Zs from the same environments.
It has been clearly evidenced that Probabilistic ensemble dynamics model (PETS) (Kurutach et al., 2018) and Meta learning based model-based RL methods, e.g. Recurrent model ReBAL and hidden-parameter model GrBAL (Nagabandi et al., 2018b;a), perform worse than CaDM (Lee et al., 2020),TMCL (Seo et al., 2020) and RIA (Guo et al., 2021), so we do not consider them as baselines in our paper.
4.2 CLUSTER VISUALIZATION AND ANALYSIS
We perform PCA visualization of estimated Ẑs from baselines and our method as Figure 3 to evaluate the cluster performance of estimated Ẑs. We can see that our method can achieve better cluster performance qualitatively. Specifically, most Ẑs estimated by RIA (Guo et al., 2021) have good cluster performance in general, but the outliers decrease the cluster performance. By contrast, we can
see that there are fewer outliers in our method than them in RIA because the built prototypes and the proposed prototypical relational loss can enforce constraints into estimated Ẑs. More qualitatively cluster comparisons can be found in Supplementary Material A.8.
We also quantitatively evaluate the cluster performance of Ẑs estimated by baselines and our method. Here we firstly perform k-means (MacQueen et al., 1967) on the estimated Ẑs, and then use the ground-truth environmental label to calculate the cluster performance. Here we use the popular mutual information-based metric AMI (Vinh et al., 2010), random-index metric ARI(Hubert & Arabie, 1985) and V-means (Rosenberg & Hirschberg, 2007) as the evaluation metrics. The results are shown in Table 3, we can see that Ẑs estimated by our method achieves the highest cluster performance. More quantitatively cluster comparisons can be found in Supplementary Material A.8.
4.3 PERFORMANCE COMPARISONS
Slim_Humanoid
Hopper
Then, we evaluate the generalization of model-based RL agents trained by our methods and baselines on test environments with unseen dynamics. Following the setting of (Seo et al., 2020), we perform experiments across five runs, and show the test returns on the test environments in Figure 6. Note that the results are slightly different from the results in RIA and TMCL paper since we change the parameter lists that change the environmental dynamics. Specifically, we change the parameter lists of all environments to the same for the convenience of performing environments.
As Figure 6 shows, we can see that our method can achieve significantly better performance than baselines in Ant, Halfcheetah, and Pendulum. Specifically, we can see that our method outperforms the second-best method RIA by 20% in Ant and Halfcheetah environments, which indicates that the changing parameter can largely change their dynamics. In addition, we can see that our method achieves only slightly better performance than baselines in Hopper, Swimmer, and Slim_Humanoid problems. For Hopper and Slim_Humanoid environment, we observe that both RIA and our method can achieve comparable results in all test environments, which indicates that the change of dynamics for Hopper is easy to model and solve. For the Swimmer environment, we observe that TMCL (Seo et al., 2020) sometimes may have a significant performance decline at the final training iteration. This may be because that TMCL may fail to learn the modalities of dynamics function in the no adaptation version. Also, our method still achieves better performance than RIA at the Swimmer task.
4.4 ABLATION STUDY
In this section, we first perform a sensitive analysis of how many trajectory prototypes should be combined into environmental prototypes. The experiments are conducted at the Pendulum task, and the results are shown as the left image of Figure 5, we can see that no matter what k it is, our method consistently outperforms the baseline CaDM (Lee et al., 2020), which indicates that our method is robust to the selection of k value. Specifically, k = 1 means that there are no hierarchical prototypes because one trajectory prototype can decide one environmental prototype, and thus environmental prototypes are the same as trajectory prototypes. We can see that all experimental results with k > 1 are better than the experimental result with k = 1, which shows the effectiveness of our proposed hierarchical prototypes method and the necessity of the built environmental prototypes. The results of k = 1 achieve the best performance on the Pendulum task, so we use it as the default parameter in all experiments.
We also perform an ablation study about the similarity metric used to calculate the similarity among trajectory prototypes. For most cluster methods, e.g. k-means (MacQueen et al., 1967), they usually calculate the similarity among entities using the Euclidean distance, while our method uses the direct causal effect as the similarity metric. To evaluate the effectiveness of the similarity metrics based on direct causal effect (Pearl, 2013), we perform experiments on the Halfcheetah and Pendulum tasks, and we can see that using the causal effect to calculate the similarities among trajectory prototypes can achieve better performance than using Euclidean distance on both tasks.
5 LIMITATION
Our paper only considers unsupervised dynamics generalization in model-based reinforcement learning, but model-free RL also suffers from this problem, and we will apply our method to model-free RL in future work. In addition, there are many other generalization problems in the reinforcement learning area, e.g., observation generalization (Wang et al., 2020; Kirk et al., 2021; Ghosh et al., 2021) and action generalization (Jain et al., 2020), and it would be interesting to extend our method to other generalization settings and train generalizable agents.
6 CONCLUSION
In this paper, we focus on the unsupervised dynamics generalization problem in model-based reinforcement learning, and propose a hierarchical prototypical method to construct environmental prototypes in an unsupervised manner. With the learned environmental prototypes, we further propose a prototypical relational loss to learn a context encoder to estimate environmental-specific factors from past transition segments, which enables the dynamics prediction function in model-based reinforcement learning to generalize well on environments with unseen dynamics. The experiments demonstrate that our method can form clearer and tighter clusters for Ẑs from the same environment and improve the performance of model-based agents in new environments with unseen dynamics.
7 REPRODUCIBILITY STATEMENT
We acknowledge the importance of reproducibility for research work and have made every effort to ensure the reproducibility of our work. As for the implementation of our method, details such as hyperparameters are provided in Section 4.1 and Appendix A.1. We will publicly release all code after the acceptance of this paper.
A APPENDIX
We will publicly release all code after the acceptance of this paper; all training details are provided in Appendix A.1 and A.3.
A.1 ENVIRONMENTAL SETTINGS
We follow the environmental settings of Lee et al. (2020) and Guo et al. (2021) and give the details of the settings as follows:
• Pendulum We modify the mass m and the length l of the Pendulum to change its dynamics.
• Half-Cheetah We modify the mass of the rigid links m and the damping of the joints d of the Half-Cheetah agent to change its dynamics.
• Swimmer We modify the mass of rigid link m and the damping of joint d of Swimmer agent to change its dynamics.
• Ant We modify the mass of the ant’s legs m to change its dynamics. Specifically, we modify two legs by multiplying their original mass by m, and the other two by 1/m.
• Slim_Humanoid We modify the mass of the rigid links m and the damping of the joints d of the Slim_Humanoid agent to change its dynamics.
• Hopper We modify the mass m of the Hopper agent to change its dynamics.
Specifically, all training and test parameter lists are set as {0.75, 0.8, 0.85, 0.90, 0.95, 1, 1.05, 1.1, 1.15, 1.2, 1.25} and {0.2, 0.4, 0.5, 0.7, 1.3, 1.5, 1.6, 1.8}, respectively.
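As an illustration of how such parameter lists induce different dynamics, the sketch below scales the body masses of a simulated environment by a sampled factor. It assumes mujoco-py-style attributes (`env.model.body_mass`); attribute names differ across MuJoCo bindings and Gym versions, so this is only a sketch.

```python
# A hedged sketch of applying a sampled dynamics-changing parameter.
import numpy as np

TRAIN_PARAMS = [0.75, 0.8, 0.85, 0.90, 0.95, 1.0, 1.05, 1.1, 1.15, 1.2, 1.25]
TEST_PARAMS = [0.2, 0.4, 0.5, 0.7, 1.3, 1.5, 1.6, 1.8]

def sample_env_param(rng: np.random.Generator, train: bool = True) -> float:
    return rng.choice(TRAIN_PARAMS if train else TEST_PARAMS)

def apply_mass_scale(env, default_mass, m: float):
    # Scale every rigid-link mass by m, starting from the stored defaults so
    # repeated calls do not compound. The agent never observes m, which makes
    # the resulting dynamics change an unobserved factor Z.
    env.model.body_mass[:] = np.asarray(default_mass) * m
    return env
```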
A.2 ALGORITHM
The training procedure is given in Algorithm 1.
A.3 TRAINING DETAILS
Similar to Lee et al. (2020) and Guo et al. (2021), we train our model-based RL agents and context encoder for 20 epochs, and at each epoch we collect 10 trajectories from the environments with an MPC controller using a planning horizon of 30. The cross-entropy method (CEM) with 200 candidate actions is chosen as the planning method. The batch size for each experiment is 128, and β is 6e-1. All modules are trained with an Adam optimizer with a learning rate of 0.001.
A.4 PREDICTION ERROR
A.5 NETWORK DETAILS
Similar to Lee et al. (2020), the context encoder is a simple 3-hidden-layer MLP, and the output dimension of the environment-specific vector ẑ is 10. The relational head is modelled as a single FC layer. The dynamics prediction model is a 4-hidden-layer MLP with 200 units per layer.
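A minimal PyTorch sketch of these modules is given below. The layer counts and sizes follow the text (3-hidden-layer context encoder, ẑ of dimension 10, single-FC relational head, 4-hidden-layer dynamics model with 200 units); activation choices and input handling are assumptions.

```python
import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for i in range(len(sizes) - 1):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # no activation on the output

class ContextEncoder(nn.Module):
    def __init__(self, segment_dim, hidden=200, z_dim=10):
        super().__init__()
        self.net = mlp([segment_dim, hidden, hidden, hidden, z_dim])

    def forward(self, segment):          # flattened (s, a) transition segment
        return self.net(segment)

class RelationalHead(nn.Module):
    def __init__(self, z_dim=10):
        super().__init__()
        self.fc = nn.Linear(2 * z_dim, 1)  # scores a concatenated (ẑ, prototype) pair

    def forward(self, z, c):
        return torch.sigmoid(self.fc(torch.cat([z, c], dim=-1)))

class DynamicsModel(nn.Module):
    def __init__(self, state_dim, action_dim, z_dim=10, hidden=200):
        super().__init__()
        self.net = mlp([state_dim + action_dim + z_dim,
                        hidden, hidden, hidden, hidden, state_dim])

    def forward(self, s, a, z):           # predicts the next state (or its mean)
        return self.net(torch.cat([s, a, z], dim=-1))
```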
Algorithm 1 The training algorithm of our hierarchical prototypical approach
Initialize the parameters of the context encoder ϕ, the dynamics prediction model θ and the relational head φ
Initialize dataset B ← ∅
for each iteration do
    Sample environments M_i from the training environments {M^tr_i}_{i=0}^K ▷ Collecting data
    for T = 1 to TaskHorizon do
        Get the estimate of the environment-specific factor ẑ^i_{t−k:t−1} = g(τ^i_{t−k:t−1}; ϕ)
        Collect (s_t, a_t, s_{t+1}, r_t, τ^i_{t−k:t−1}) from M_i with the dynamics prediction model θ
        Update B ← B ∪ {(s_t, a_t, s_{t+1}, r_t, τ^i_{t−k:t−1})}
        Initialize a trajectory prototype C^i_tra for each sampled trajectory
    end for
    for each dynamics training iteration do ▷ Update ϕ, θ and φ
        for k = 1 to K do
            Sample data (τ^{i,b,P}_{t−k:t−1}, C^i_tra) and (τ^{j,b,P}_{t−k:t−1}, C^j_tra) with batch size B from B
            Get the estimates ẑ^{i,B,P}_{t−k:t−1} = g(τ^{i,B,P}_{t−k:t−1}; ϕ) and ẑ^{j,B,P}_{t−k:t−1} = g(τ^{j,B,P}_{t−k:t−1}; ϕ)
            Estimate the similarity w between C^i_tra and C^j_tra
            Combine the top-k most similar C^i_tra into the environmental prototype C^i_env
            L^tot = L^pred_{ϕ,θ}(τ^{i,B,K}_{t:M}, ẑ^{i,B,P}_{t−k:t−1}) + L^{i−relation}_{ϕ,φ}(ẑ^{i,B,P}_{t−k:t−1}, C^i), with prototypes at different levels
            Update θ, ϕ, φ ← ∇_{θ,ϕ,φ} (1/B) L^tot
        end for
    end for
end for
A.6 DIRECT CAUSAL EFFECT BETWEEN TRAJECTORY PROTOTYPES
Concretely, the direct causal effect difference between two trajectory prototypes c^j_tra and c^k_tra can be calculated through the controllable causal effect (Pearl, 2013) as follows:

CDE_{c^j_tra, c^k_tra}(s_t, a_t) = E[S_{t+1} | do(S_t = s_t, A_t = a_t), do(Z = c^j_tra)]   (5)
                                 − E[S_{t+1} | do(S_t = s_t, A_t = a_t), do(Z = c^k_tra)]   (6)
                                 = E[S_{t+1} | S_t = s_t, A_t = a_t, Z = c^j_tra] − E[S_{t+1} | S_t = s_t, A_t = a_t, Z = c^k_tra],   (7)
where do denotes the do-calculus (Pearl, 2000). Because we control all variables that can influence S_{t+1}, and there is no confounder between the mediators (S_t, A_t) and S_{t+1} other than Z (Guo et al., 2021), we can remove all do operators in Eq. (6). Therefore, the interventional distribution obtained by controlling Z and (S_t, A_t) in Eq. (6) equals the conditional distribution in Eq. (7). In addition, the direct causal effect between c^j_tra and c^k_tra may differ for different values of S_t and A_t, so we sample S_t and A_t independently of Z to calculate the average controlled direct effect.
Concretely, we directly use a mini-batch of S_t and A_t pairs (s^i_t, a^i_t) to calculate their average controlled direct effect, as Figure 2 (a) shows:
w_{jk} = (1/N) Σ_{i=1}^N |CDE_{c^j_tra, c^k_tra}(s^i_t, a^i_t)|,   (8)
where N is the batch size and j and k are the indices of the trajectory prototypes.
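The sketch below illustrates Eq. (8) in PyTorch, assuming a learned dynamics model `f_hat` (hypothetical name) that maps (s, a, z) to a predicted next state; how the per-dimension CDE vector is reduced to a scalar (here, the mean absolute value) is an assumption.

```python
import torch

@torch.no_grad()
def avg_cde(f_hat, c_j, c_k, s_batch, a_batch):
    # Hold (S_t, A_t) fixed and intervene only on Z: the difference between
    # the two predictions is the direct causal effect of swapping prototypes.
    n = s_batch.shape[0]
    z_j = c_j.expand(n, -1)
    z_k = c_k.expand(n, -1)
    diff = f_hat(s_batch, a_batch, z_j) - f_hat(s_batch, a_batch, z_k)
    # |CDE| averaged over the mini-batch, as in Eq. (8).
    return diff.abs().mean().item()
```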
A.7 CONNECTION BETWEEN RELATION LOSS AND MUTUAL INFORMATION
We denote the environment-specific factor as Z and its prototypes as C. By definition, the mutual information between Z and C is:
I(Z;C) = E_{P_{ZC}}[ log( p(z, c) / (p(z) p(c)) ) ]   (9)
where P_{ZC} is the joint distribution of Z and C, and P_Z and P_C are their marginal distributions. To estimate the mutual information between Z and C, we can use the probabilistic classifier method proposed by Tsai et al. (2020). Concretely, we introduce a Bernoulli random variable Y that indicates whether a given data pair (z, c) is drawn from the joint distribution P_{ZC} (Y = 1) or from the product of the marginal distributions P_Z P_C (Y = 0). Therefore, the mutual information I(Z;C) between Z and C can be rewritten as:
I(Z;C) = E_{P_{ZC}}[ log( p(z, c) / (p(z) p(c)) ) ]
       = E_{P_{ZC}}[ log( p(z, c | Y = 1) / p(z, c | Y = 0) ) ]
       = E_{P_{ZC}}[ log( (p(Y = 0) P(Y = 1 | z, c)) / (p(Y = 1) P(Y = 0 | z, c)) ) ]   (10)
Obviously, p(Y = 0)/p(Y = 1) can be approximated by the sample-size ratio n_{P_Z P_C}/n_{P_{ZC}}, while P(Y = 1 | z, c)/P(Y = 0 | z, c) can be measured by a classifier h(Y | z, c) trained with the relational loss below:
L^{relation}_{φ,ϕ} = −[ Y · log h([z, c]; φ) + (1 − Y) · log(1 − h([z, c]; φ)) ],   (11)
where Y = 1 if the given pair (z, c) is drawn from the joint distribution P_{ZC}, and Y = 0 if it is drawn from the product of the marginal distributions P_Z P_C. Because p(Y = 0)/p(Y = 1) tends to be a constant, optimizing our relational loss effectively estimates the mutual information I(Z;C) between Z and C. Therefore, optimizing Eq. (11) maximizes the mutual information between ẑ and its corresponding prototype, which represents the semantics of a trajectory or environment. Readers interested in the concrete bounds of this mutual-information estimator may refer to Tsai et al. (2020); Guo et al. (2021).
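A minimal sketch of the resulting classifier training is shown below. Positive pairs are matched (z, c) samples from the joint distribution, and negative pairs are formed by shuffling c within the batch to approximate the product of marginals; the shuffling scheme is an assumption.

```python
import torch
import torch.nn.functional as F

def relational_loss(head, z, c):
    """head: the relational head h([z, c]; φ); z, c: matched (N, d) batches."""
    perm = torch.randperm(c.shape[0])
    pos = head(z, c)                      # pairs drawn from P_ZC   (Y = 1)
    neg = head(z, c[perm])                # pairs drawn from P_Z P_C (Y = 0)
    y_pos = torch.ones_like(pos)
    y_neg = torch.zeros_like(neg)
    # Binary cross-entropy over both pair types implements Eq. (11).
    return F.binary_cross_entropy(torch.cat([pos, neg]),
                                  torch.cat([y_pos, y_neg]))
```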
A.8 VISUALIZATION AND ANALYSIS | 1. What is the main contribution of the paper regarding model-based reinforcement learning?
2. How does the proposed approach estimate latent environmental factors, and how does it improve upon prior works such as Guo et al.?
3. What are some clarity issues in the paper, particularly in Sections 3.2 and 3.4, and how can they be addressed?
4. How does the proposed method differ from Guo et al. in terms of its application of the losses proposed by Guo et al.?
5. Can you provide more context on what a trajectory label is and how it relates to the environment labels?
6. How does the proposed method optimize Eq. 4 to ensure learned prototypes are semantically meaningful?
7. What is the y-axis on Figure 5 (left)?
8. Can you explain the difference between "no adaptation" version of TCML and the regular version, and why the former is considered a fairer comparison?
9. Are there any small details that need fixing in the paper, such as grammar or spelling errors?
10. How does the paper's results show that the learned clusters achieve good separation and closely match the ground truth? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper is designed to address the issue of model-based reinforcement learning generalizing to new environments with slightly different transition dynamics. Specifically, the case where unobserved environment factors like the friction coefficient of the floor can differ between the training environments and the test environments. The proposed approach estimates latent environmental factors by clustering together trajectories from different environments. The results show that the learned clusters achieve good separation and more closely match the ground truth, and that the technique matches or exceeds the performance of closely related baselines in OpenAI Gym and Mujoco environments.
Strengths And Weaknesses
The biggest strength of the paper is that it achieves good results in both clustering and model-based RL performance, as evidenced by Table 1 and Figure 4. To the extent that these results are reproducible, they are of interest to the community.
The biggest issue with the paper is clarity. Highly critical points needed to understand the contributions of the paper and evaluate it are not made clear. For example, Section 3.2 makes conflicting statements about what kind of trajectory and environment labels are available. It first states that no environment labels are available, "so we cannot estimate environmental prototypes directly. Fortunately, we still have the trajectory label information". What is a trajectory label? What information does it provide? Does it indicate in which environment the trajectory was obtained? Wouldn't this be like an environment label? The section then goes on to show that each environment prototype is constructed as the "mean of its corresponding trajectory prototype". How do we know which trajectory prototypes correspond to which environments without environment label information?
The weakness with clarity also impacts the ability to evaluate the novelty of the work. This work seems to be largely based on prior work by Guo et al., including the main losses used (Eq 2 and 3). Therefore, it is not clear what this work contributes beyond the prior work by Guo et al. The authors include a section (3.4) on the "Difference to RIA" (Guo et al.), but this section seems to depend heavily on the point that the proposed approach uses trajectory label information, when (as stated above) it is not clear what a trajectory label is. Further, Section 3.4 states that unlike Guo et al., the proposed method estimates the similarity between different prototypes (which I take to mean different trajectories). But, according to the implementation details (Section 4.1), in Guo et al. "the trajectory segments are randomly sampled from the same trajectory", which seems to imply that Guo et al. applies the proposed losses to trajectories as well. Therefore the difference from Guo et al. is very unclear to me. For the authors to improve this point, it would be good to state how their application of the losses proposed by Guo et al. (Eq 2 & 3) differs for their specific use case. This could be done within sections 3.2 and 3.3, rather than having a high-level post-hoc explanation in section 3.4. Evidently there is some difference from RIA, since the results indicate their method can outperform RIA, but how this difference is achieved is not clear.
Clarity, Quality, Novelty And Reproducibility
Clarity: As stated above, clarity is a major issue for this paper.
Some places where the clarity was good:
The friction coefficient example in the intro was compelling and improved the clarity of the paper.
the top paragraph of page 4 is really well written and clear.
Some important points that were pretty unclear (in addition to the ones listed previously):
The abstract is not well written compared to the rest of the intro. One issue is the use of the phrase "the environment-specific factor", which doesn't really make sense. I think the phrase "latent environmental factors" or "unobserved latent environmental factors" would be closer to what you are trying to say.
Similarly, the intro refers to "environmental factor Z" without introducing Z. This could be improved by referring to latent environmental factors.
What does it mean to say "we also enable the built prototypes to optimize Eq. 4 to ensure the learned prototypes are semantically meaningful". How are the prototypes used here? What does this accomplish?
Why is the "no adaptation" version of TCML the more fair comparison? What is it? The results references it repeatedly but the reader has no context on how this affects the method.
What is the y-axis on Figure 5 (left)?
Some small details to fix:
Stating "estimating semantically meaningful Z for each environments is the first step for generalization of model-based RL" is a bold claim. Maybe say "a promising first step".
Referring to Figure 3 in the intro without a reference or explanation is disorganized. You can say "our results will show".
Grammar issues in the intro: "semantically meaningfully", and "our method propose to".
in the related work, "continue learning" -> "continual learning".
spelling/grammar issues on p. 5: "i.e., and" (choose either i.e. or and), "consider take account", and "same with Section 3.2 as ."
p. 6 "convthe"
p. 7 "model predictive model (MPC)"
the latex template used for the paper is not the correct version (it should say "Under review as a conference paper at ICLR" on the first page above the top bar)
p. 9 "sensitive analysis" -> "sensitivity analysis"
Originality: As detailed above, it seems that the most similar work (Guo et al.) also proposed causal and relational learning of environment-specific factors. The difference from Guo et al. is not made clear either in the related work section, or in the methods section. This needs to be improved.
Quality:
It's difficult to assess whether the methods proposed by the paper are based on a reasonable hypothesis about how to estimate environment dynamics, when details like what the CDE (causal direct effects) loss is are never actually explained. Why is CDE a good thing to do? What does it aim to estimate, and how? It also feels disorganized that Figure 2(a) is devoted to illustrating CDE, but a textual explanation of what it does is not provided.
Similarly, what is the relation loss? Why is it important and a good thing to do?
The last paragraph of Section 4.1 is a great thing to include (it explains why other methods are not benchmarked against, because they were shown to be worse than the baselines that were included). However, I am curious about the performance of model-free RL in the studied environments. Is model-based RL actually better performing for these tasks? |
ICLR | Title
Hierarchical Prototypes for Unsupervised Dynamics Generalization in Model-Based Reinforcement Learning
Abstract
Generalization remains a central challenge in model-based reinforcement learning (MBRL). Recent works attempt to model the environment-specific factor and incorporate it into the dynamics prediction to enable generalization to different contexts. By estimating environment-specific factors from historical transitions, however, earlier research was unable to clearly distinguish environment-specific factors across different environments, resulting in poor performance. To address this issue, we introduce a set of environmental prototypes to represent the environment-specific representation of each environment. By encouraging learned environment-specific factors to resemble their assigned environmental prototypes more closely, the discrimination of factors between different environments is enhanced. To learn such prototypes in an unsupervised manner, we propose a hierarchical prototypical method which first builds trajectory embeddings according to the trajectory label information, and then hierarchically constructs environmental prototypes from trajectory prototypes sharing similar semantics. Experiments demonstrate that the environment-specific factors estimated by our method have superior clustering performance and consistently improve the generalization performance of MBRL across six environments.
1 INTRODUCTION
Reinforcement learning (RL) has achieved great success in solving sequential decision-making problems, e.g., board games (Silver et al., 2016; 2017; Schrittwieser et al., 2020), computer games (Mnih et al., 2013; Silver et al., 2018; Vinyals et al., 2019), and robotics (Levine & Abbeel, 2014; Bousmalis et al., 2018), but it still suffers from low sample efficiency, making it challenging to solve real-world problems, especially those with limited or expensive data (Gottesman et al., 2018; Lu et al., 2018; 2020; Kiran et al., 2020). In contrast, model-based reinforcement learning (MBRL) (Janner et al., 2019; Kaiser et al., 2019; Schrittwieser et al., 2020; Zhang et al., 2019; van Hasselt et al., 2019; Hafner et al., 2019b;a; Lenz et al., 2015) has recently received wider attention, because it explicitly builds a predictive model and can generate samples for learning an RL policy, alleviating the sample-inefficiency problem.
As a sample-efficient alternative, model-based RL derives a policy from a learned dynamics prediction model. Therefore, the prediction accuracy of the dynamics model is highly correlated with policy quality (Janner et al., 2019). However, it has been shown that the learned dynamics prediction model is not robust to changes in the environmental dynamics (Lee et al., 2020; Seo et al., 2020; Guo et al., 2021), and thus agents trained with model-based RL algorithms generalize poorly to environments with different dynamics. Such vulnerability to changes in environmental dynamics makes model-based RL methods unreliable in real-world applications where the factors that affect the dynamics are only partially observed. For example, the friction coefficient of the ground is usually difficult to measure, while changes in it can largely affect the dynamics when controlling a robot walking on the ground, leading to performance degradation of an agent trained by model-based RL methods (Yang et al., 2019; Gu et al., 2017; Nagabandi et al., 2018b).
Recent studies (Seo et al., 2020; Nagabandi et al., 2018a; Lee et al., 2020; Guo et al., 2021) have demonstrated that incorporating an environmental factor Z into dynamics prediction facilitates the generalization of model-based RL methods to unseen environments. However, environmental factors
are unobservable in the majority of applications; for instance, the friction coefficient is not available to robots. Therefore, estimating a semantically meaningful Z for each environment is a key first step towards the generalization of model-based RL. However, this is not easy to do, because environments are hard to label. For example, it is impractical to measure the friction coefficient of every road. Without the label information of environments, the Zs estimated by previous methods (Seo et al., 2020; Nagabandi et al., 2018a; Lee et al., 2020; Guo et al., 2021) cannot form clear clusters for different environments, as Figure 3 shows. These entangled Zs cannot represent distinct environment-specific information, and may thus cause the learned dynamics prediction function to deviate from the true one, resulting in poor generalization ability.
In this paper, we propose a hierarchical prototypical method (HPM) with the objective of learning an environment-specific representation with distinct clusters. By representing environment-specific information in a semantically meaningful way, HPM learns a more generalizable dynamics prediction function. To achieve this, our method constructs a set of environmental prototypes to capture environment-specific information for each environment. By enforcing each estimated Ẑ to be more similar to its respective environmental prototype and dissimilar to other prototypes, the estimated Ẑs can form compact clusters for the purpose of learning a generalizable dynamics prediction function. Because environmental labels are not available, we cannot construct environmental prototypes directly. To address this issue, we begin by building easily-learned trajectory prototypes based on the trajectory labels. Environmental prototypes can then be created by merging trajectory prototypes with similar semantics, following the natural hierarchical relationship between trajectories and environments.
With the built hierarchical prototype structure, we further propose a prototypical relational loss to learn Z from past transitions. Specifically, we not only aggregate Ẑs with similar causal effects by optimizing the relational loss (Guo et al., 2021), but also aggregate each Ẑ with its corresponding trajectory and environmental prototypes via the relational loss. In addition, to alleviate the over-penalization of semantically similar prototypes, we propose to penalize prototypes adaptively with the intervention similarity. In the experiments, we evaluate our method on a range of tasks from OpenAI Gym (Brockman et al., 2016) and MuJoCo (Todorov et al., 2012). The experimental results show that our method forms clearer and tighter clusters for the Ẑs, and that such Ẑs improve the generalization ability of model-based RL methods, achieving state-of-the-art performance in new environments with different dynamics without any adaptation step.
2 RELATED WORK
Model-based reinforcement learning With a learned dynamics prediction model, model-based reinforcement learning (MBRL) enjoys high data efficiency. The learned prediction model can generate samples for training the policy (Du & Narasimhan, 2019; Whitney et al., 2019) or for planning ahead at inference time (Atkeson & Santamaria, 1997; Lenz et al., 2015; Tassa et al., 2012). Therefore, the performance of MBRL relies heavily on the prediction accuracy of the dynamics model. To improve this accuracy, several methods have been proposed, such as ensemble methods (Chua et al., 2018), latent dynamics models (Hafner et al., 2019b;a; Schrittwieser et al., 2020), and bidirectional prediction (Lai et al., 2020). However, current predictive methods still struggle to generalize to unseen dynamics, which hinders the application of MBRL methods to real-world problems.
Dynamics generalization in model-based reinforcement learning To adapt MBRL to unknown dynamics, meta-learning methods (Nagabandi et al., 2018a;b; Sæmundsson et al., 2018) attempted to update model parameters with a small number of gradient updates (Finn et al., 2017) or through the hidden representations of a recurrent model (Doshi-Velez & Konidaris, 2016). Using multi-choice learning, (Lee et al., 2020; Seo et al., 2020) attempted to learn a generalized dynamics model by incorporating environment-specific information or clustering dynamics implicitly, with the goal of adapting to any dynamics without additional training. Through relational learning and causal-effect estimation, RIA (Guo et al., 2021) aims to explicitly learn meaningful environment-specific information. However, the dynamics changes learned by RIA still suffer from a high-variance issue.
Prototypical methods By learning an encoder to embed data in a low-dimensional representation space, prototypical methods gain a set of prototypical embeddings, which are referred to as
prototypes (Asano et al., 2020; Caron et al., 2020b) that form the basis of this representation space. Prototypical methods aim to derive compact data representations gathered around their corresponding prototypes (Li et al., 2021; Oord et al., 2018; Wang et al., 2021), which capture basic semantic structure. Therefore, prototypical methods have been applied in many areas, e.g., self-supervised learning (Li et al., 2020; Caron et al., 2020a), few-shot learning (Snell et al., 2017; Bateni et al., 2020; Simon et al., 2020), domain adaptation (Tanwisuth et al., 2021) and continual learning (De Lange & Tuytelaars, 2021; Yu et al., 2020). In the RL area, (Yarats et al., 2021) ties representation learning to exploration through prototypical representations for image-based RL, while our method focuses on the unsupervised dynamics generalization problem in model-based RL, aiming to learn semantically meaningful dynamics changes with a prototypical method. Specifically, we propose a hierarchical method to construct environmental prototypes from trajectory prototypes.
3 METHODS
In this section, we first introduce the formulation of the unsupervised dynamics generalization problem in model-based reinforcement learning. Then we present the details of how our hierarchical prototypical method learns the environment-specific factors.
3.1 PROBLEM SETUP
We formulate standard reinforcement learning as a Markov decision process (MDP) M = (S, A, r, f, γ, ρ_0) over discrete time (Puterman, 2014; Sutton & Barto, 2018), where S, A, γ ∈ (0, 1] and ρ_0 are the state space, the action space, the reward discount factor, and the initial state distribution, respectively. The dynamics function f : S × A → S gives the next state s_{t+1} conditioned on the current state s_t and action a_t, and the reward function r : S × A → R specifies the reward at each timestep t given s_t and a_t. The goal of RL is to learn a policy π(·|s) mapping from a state s ∈ S to a distribution over actions that maximizes the expected cumulative return E_{s_t∈S, a_t∈A}[ Σ_{t=0}^∞ γ^t r(s_t, a_t) ] over timesteps. In model-based RL, we aim to learn a prediction model f̂ to approximate the dynamics function f; f̂ can then generate training data for the policy π or predict future sequences for planning. With the data provided by the learned dynamics model f̂, model-based RL has higher data efficiency and better planning ability than model-free RL.
In this paper, we consider the unsupervised dynamics generalization problem in model-based RL. Different from standard reinforcement learning, in the dynamics generalization problem there exists an unobserved variable Z that affects the dynamics function f. The goal of dynamics generalization is to derive a generalizable policy from K given training MDPs {M^tr_i}_{i=0}^K and to have the policy generalize well to L test MDPs {M^te_j}_{j=0}^L. Without loss of generality, we assume all MDPs share the same state and action spaces but have different factors Z.
In the context of model-based reinforcement learning, we need to learn the dynamics function before learning the policy. In order to generalize the dynamics function to different environments, we incorporate the unobserved variable Z into the dynamics prediction process, i.e., we extend the dynamics function from f : S × A → S to f : S × A × Z → S. Since Z is not available, we estimate it from past transition segments τ_{t−k:t−1} = {(s_{t−k}, a_{t−k}), ..., (s_{t−1}, a_{t−1})} (Seo et al., 2020; Lee et al., 2020; Guo et al., 2021).
Next, we present how our hierarchical prototypical method estimates Z and enables the learned dynamics function f to generalize to environments with unseen dynamics. In Section 3.2, we present how our method hierarchically constructs prototypes as representative embeddings of environment-specific information for each environment. In Section 3.3, we describe how we update the prototypes dynamically and how to estimate environment-specific factors Z from past transition segments using the prototypes. Once the Zs are estimated, we describe how they enable the dynamics function f to generalize well to environments with different dynamics.
3.2 HIERARCHICAL ENVIRONMENT PROTOTYPES CONSTRUCTION
The objective of our method is to construct a set of prototypes to represent the environment-specific information of each environment and to guide the context encoder in estimating the environment-specific variable Z from historical transition segments. In each training iteration, we randomly sample a trajectory from a subset of the training MDPs. Because labels of MDPs are not available, we cannot estimate environmental prototypes directly. Fortunately, we still have the trajectory label information, i.e., we know which trajectory each transition segment comes from, and thus we can first construct a prototype for each sampled trajectory. Specifically, we denote the prototype of the j-th trajectory as c^j_tra. Because different trajectories may be sampled from a single environment, trajectory prototypes from the same environment should share similar semantics for dynamics prediction. Therefore, we can construct environmental prototypes hierarchically from trajectory prototypes sharing similar semantics. In this way, environmental prototypes and trajectory prototypes form a natural hierarchical structure, and environmental prototypes can be constructed using trajectory label information even when no environmental label is available.
If we denote by w^{i,j}_tra the semantic similarity between the trajectory prototypes c^i_tra and c^j_tra, we can construct a trajectory similarity matrix w, as Figure 2 (b) shows, where each row w_i represents the similarity between c^i_tra and all other trajectory prototypes. Because it is unknown how many environments the sampled trajectories come from, we directly construct an environmental prototype c^i_env for each trajectory prototype c^i_tra. Specifically, each environmental prototype c^i_env is the mean of its corresponding trajectory prototype c^i_tra and c^i_tra's top-k most similar trajectory prototypes:
c^i_env = (1/K) Σ_{k∈T_i} c^k_tra,   (1)
where T_i denotes the index set of the top-K trajectory prototypes most similar to c^i_tra. In this way we can obtain the i-th environmental prototype, but before that we need to calculate the semantic similarity matrix w.
Naively, we could use the Euclidean distance to measure the similarity between trajectory prototypes. However, this ignores the semantic effect of trajectory prototypes on dynamics prediction. If two trajectory prototypes come from a single environment, they should share the same semantics, i.e., their effects on the dynamics function should be the same. Therefore, we take the semantic effect on dynamics prediction into account in the similarity estimation. However, it is challenging to estimate the effect of a trajectory prototype on the dynamics function, because Z is not the only factor that influences it. To remove the effects of other factors, e.g., states and actions, our method draws inspiration from the recently proposed RIA method (Guo et al., 2021) and calculates the direct causal effect (CDE) between trajectory prototypes. By controlling all factors that affect the dynamics function over a mini-batch, we can estimate the average CDE between different trajectory prototypes as their semantic difference d. Concretely, we compute d between two trajectory prototypes using a mini-batch of S_t and A_t pairs (s^k_t, a^k_t), as Figure 2 (a) shows:
d_{ij} = (1/N) Σ_{k=1}^N |CDE_{c^i_tra, c^j_tra}(s^k_t, a^k_t)|,   (2)
where N is the batch size and i and j are the indices of the trajectory prototypes. Please refer to Appendix A.6 for the details of CDE. With the semantic difference d, we obtain the semantic similarity w via w = exp(−d/β), where β is a factor that controls the sensitivity of w. With the calculated similarity w, we can construct environmental prototypes via Eq. (1).
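Putting Eq. (1) and Eq. (2) together, the sketch below builds environmental prototypes from trajectory prototypes; `avg_cde` refers to the CDE routine detailed in Appendix A.6, and the loop-based distance computation is a simplification.

```python
import torch

@torch.no_grad()
def build_env_prototypes(c_tra, f_hat, s_batch, a_batch, top_k=3, beta=0.6):
    m = c_tra.shape[0]                      # number of trajectory prototypes
    d = torch.zeros(m, m)
    for i in range(m):
        for j in range(m):
            d[i, j] = avg_cde(f_hat, c_tra[i], c_tra[j], s_batch, a_batch)
    w = torch.exp(-d / beta)                # semantic similarity matrix, Eq. (2)
    # Each row's top-K most similar prototypes (a prototype is most similar
    # to itself, so the set includes c_tra[i]) are averaged into c_env[i].
    idx = w.topk(top_k, dim=1).indices
    c_env = c_tra[idx].mean(dim=1)          # Eq. (1)
    return c_env, w
```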
Next, we will describe how to update the built trajectory and environmental prototypes to ensure that hierarchical prototypes are representative for each trajectory and environment, and how they help learn the context encoder.
3.3 PROTOTYPICAL RELATIONAL LEARNING
As Figure 1 shows, we introduce a context encoder g parameterized by ϕ to estimate the environment-specific factor ẑ^i_t from the past transition segment τ_{t−k:t−1} = {(s_{t−k}, a_{t−k}), ..., (s_{t−1}, a_{t−1})}, following previous methods:
ẑ^i_t = g(τ^i_{t−k:t−1}; ϕ).
In order to learn the context encoder and encourage the estimated environment-specific factor ẑ^i_t to be semantically meaningful, we optimize g via the proposed prototypical relational loss so that Zs from the same environment form a clear cluster. Concretely, we introduce a relational head (Patacchiola & Storkey, 2020) as a learnable function h so that each environment-specific estimate ẑ^i_t is closely surrounded by its associated cluster prototypes. To achieve this, we concatenate ẑ^i_t with its assigned prototype, e.g., c^i_tra, as a positive pair, while concatenations with other prototypes form negative pairs. We then use the relational head h, parameterized by φ, to quantify a similarity score ŷ. Increasing the similarity scores ŷ of positive pairs and decreasing those of negative pairs can be treated as a simple binary classification problem, which can be regarded as maximizing the mutual information between the Zs and their corresponding prototypes (please refer to (Tsai et al., 2020; Guo et al., 2021) and Appendix A.7). However, this neglects the semantic correlation among different prototypes, and so may excessively penalize semantically relevant prototypes. To alleviate such over-penalization, we propose to penalize prototypes adaptively with the intervention similarity (Guo et al., 2021) through the following objective:
L^{i−p−relation}_{φ,ϕ} = −(1/(N(N−1))) Σ_{i=1}^N Σ_{j=1}^N [ (y_{i,j} + (1 − y_{i,j}) · w_{i,j}) · log h([ẑ_i, c_j]; φ) + (1 − y_{i,j}) · (1 − w_{i,j}) · log(1 − h([ẑ_i, c_j]; φ)) ],   (3)
where w_{i,j} ranges from 0 to 1 and serves as the similarity between prototypes. The first term of Eq. (3) pulls ẑ_i towards prototypes c_j with similarity weight w_{i,j}, and the second term pushes them apart with weight 1 − w_{i,j}. To maintain the hierarchical prototype structure (Li et al., 2020; Guo et al., 2022), we simultaneously update the context encoder by optimizing the objective of Eq. (3) between ẑ and both the trajectory and the environmental prototypes. Specifically, the similarity w_env between environmental prototypes and ẑ is computed in the same way as in Section 3.2. In addition, we also optimize the relational loss among different ẑs following (Guo et al., 2021; Li et al., 2020), because ẑ itself can be regarded as an instance prototype, which retains local smoothness and helps bootstrap clustering. A sketch of this loss is given after Eq. (3) below.
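The sketch below illustrates the structure of Eq. (3); the relational head is assumed to output a score in (0, 1), `assign[i]` (a hypothetical name) gives the prototype assigned to ẑ_i, and the normalization constant is simplified relative to the paper.

```python
import torch

def prototypical_relational_loss(head, z_hat, protos, assign, w):
    """z_hat: (n, d) estimates; protos: (m, d); w: (m, m) prototype similarities."""
    n, m = z_hat.shape[0], protos.shape[0]
    loss = 0.0
    for i in range(n):
        for j in range(m):
            score = head(z_hat[i:i + 1], protos[j:j + 1]).squeeze()
            if j == assign[i]:                       # positive pair, y = 1
                loss = loss - torch.log(score + 1e-8)
            else:                                    # negative, softened by w
                wij = w[assign[i], j]
                loss = loss - (wij * torch.log(score + 1e-8)
                               + (1 - wij) * torch.log(1 - score + 1e-8))
    return loss / (n * m)
```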
In order to improve the generalization ability on different dynamics, we incorporate the estimated environment-specific factor ẑ_t into the dynamics prediction model f̂ and optimize the following objective, following (Lee et al., 2020; Seo et al., 2020; Janner et al., 2019):
L^pred_{θ,ϕ} = −(1/N) Σ_{i=1}^N log f̂(s^i_{t+1} | s^i_t, a^i_t, g(τ^i_{t−k:t−1}; ϕ); θ),   (4)
where k is the length of the transition segment, t is the current timestep, and N is the sample size. In addition, we also allow the built prototypes to be updated by optimizing Eq. (4), which ensures that the learned prototypes are semantically meaningful for dynamics prediction. Overall, our method simultaneously optimizes the prediction loss of Eq. (4) and the prototypical relational loss of Eq. (3) with prototypes at different levels, learning the context encoder g together with semantically meaningful prototypes. This encourages the estimated environment-specific Ẑs to form clear clusters and thus yields a generalizable prediction function f.
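Under the common assumption that f̂ is Gaussian with fixed unit variance, the negative log-likelihood in Eq. (4) reduces to a scaled squared error, as the sketch below shows; the Gaussian assumption is ours, not stated in the text.

```python
import torch

def prediction_loss(f_hat, encoder, segment, s, a, s_next):
    z_hat = encoder(segment)             # ẑ = g(τ_{t-k:t-1}; ϕ)
    pred = f_hat(s, a, z_hat)            # mean of f̂(s_{t+1} | s_t, a_t, ẑ; θ)
    # Negative Gaussian log-likelihood with unit variance, up to a constant.
    return 0.5 * ((pred - s_next) ** 2).sum(dim=-1).mean()
```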
3.4 DIFFERENCE TO RIA
Our method borrows from RIA (Guo et al., 2021) the idea of estimating semantic similarities, but differs from RIA in three aspects: 1) RIA estimates the semantic similarities between individual instance estimates ẑ, while our method estimates the semantic similarities between prototypes. Since the number of prototypes is limited, the training procedure is faster and more stable than RIA's. 2) Our method fully exploits the hierarchy between trajectories and environments and constructs environmental prototypes based on trajectory label information, which RIA ignores; thus our method can achieve better performance than RIA. 3) RIA only pulls together ẑ and other estimates with similar semantics, whereas our prototypical relational learning further pulls ẑ towards its corresponding trajectory prototype c_tra and environmental prototype c_env.
4 EXPERIMENT
In this section, we perform experiments to evaluate the effectiveness of our approach by answering the following questions: 1) Does our method encourage the learned Zs to form clear clusters? (Section 4.2); 2) Can the learned Ẑs with clear clusters reduce the dynamics prediction errors in model-based RL? (Supplementary Material A.4); 3) Can the learned Ẑs with clear clusters improve the performance of model-based RL in environments with unseen dynamics? (Section 4.3); 4) Is our method sensitive to hyperparameters? (Section 4.4)
4.1 ENVIRONMENTAL SETUP
Implementation details Our method includes three learnable functions and a set of learnable trajectory prototypes. The learnable functions are the context encoder, the relational head and the prediction head; all are constructed as MLPs and optimized by Adam (Kingma & Ba, 2014) with a learning rate of 1e-3. During training, trajectory segments are randomly sampled from the same trajectory to break the temporal correlations of the training data, as also done by (Seo et al., 2020; Guo et al., 2021). We combine the k = 3 most similar trajectory prototypes into an environmental prototype, and the length of the transition segments is 10; the hyper-parameters are the same for all experiments, and details can be found in Supplementary Material A.1. Tasks Following previous methods (Lee et al., 2020; Seo et al., 2020), we perform experiments on a classic control task (Pendulum) from OpenAI Gym (Brockman et al., 2016) and simulated robotic control tasks (HalfCheetah, Swimmer, Ant, Hopper, Slim-Humanoid) from the MuJoCo physics engine (Todorov et al., 2012). Dynamics settings To construct environments with different dynamics, we change environmental parameters (e.g., the length and mass of Pendulum) and predefine them in training and test parameter lists, following previous methods (Zhou et al., 2019; Packer et al., 2019; Lee et al., 2020; Seo et al., 2020; Guo et al., 2021). For convenience, the training parameter lists for all tasks are {0.75, 0.8, 0.85, 0.90, 0.95, 1, 1.05, 1.1, 1.15, 1.2, 1.25}, and the test parameter lists are {0.2, 0.4, 0.5, 0.7, 1.3, 1.5, 1.6, 1.8}. Note that the test parameters lie outside the range of the training parameters. At training time, we randomly sample parameters from the training parameter list to train our context encoder and dynamics
prediction model. We then test our model on environments with unseen dynamics sampled from the test parameter list. All details are given in Supplementary Material A.1. Planning Following (Lee et al., 2020; Seo et al., 2020), we use model predictive control (MPC) (Maciejowski, 2002) to select actions based on the learned dynamics prediction model, and we assume that the reward functions are known. In addition, we use the cross-entropy method (CEM) (De Boer et al., 2005) to find the best action sequences. Baselines In this paper, we compare our approach with the following state-of-the-art model-based RL methods for dynamics generalization:
• Context-aware dynamics model (CaDM) (Lee et al., 2020): This method designs several auxiliary losses, including backward and future state prediction, to learn the context from transition segments.
• Trajectory-wise Multiple Choice Learning (TMCL) (Seo et al., 2020): TMCL introduces multiple-choice learning to adapt to different environments. For a fair comparison, we use the no-adaptation version of this method, which, like ours, does not update the model on the test environments.
• Relation Intervention Approach (RIA) (Guo et al., 2021): This method proposes to use relational intervention loss to cluster Zs from the same environments.
It has been clearly evidenced that the probabilistic ensemble dynamics model (PETS) (Kurutach et al., 2018) and meta-learning-based model-based RL methods, e.g., the recurrent model ReBAL and the hidden-parameter model GrBAL (Nagabandi et al., 2018b;a), perform worse than CaDM (Lee et al., 2020), TMCL (Seo et al., 2020) and RIA (Guo et al., 2021), so we do not consider them as baselines in our paper.
4.2 CLUSTER VISUALIZATION AND ANALYSIS
We perform PCA visualization of the Ẑs estimated by the baselines and our method, as Figure 3 shows, to evaluate the cluster performance of the estimated Ẑs. We can see that our method qualitatively achieves better cluster performance. Most Ẑs estimated by RIA (Guo et al., 2021) cluster well in general, but outliers decrease the cluster performance. By contrast, there are fewer outliers in our method than in RIA, because the built prototypes and the proposed prototypical relational loss impose constraints on the estimated Ẑs. More qualitative cluster comparisons can be found in Supplementary Material A.8.
We also quantitatively evaluate the cluster performance of the Ẑs estimated by the baselines and our method. Here we first perform k-means (MacQueen et al., 1967) on the estimated Ẑs, and then use the ground-truth environmental labels to calculate the cluster performance, using the popular mutual-information-based metric AMI (Vinh et al., 2010), the Rand-index-based metric ARI (Hubert & Arabie, 1985), and the V-measure (Rosenberg & Hirschberg, 2007) as evaluation metrics. The results are shown in Table 3: the Ẑs estimated by our method achieve the highest cluster performance. More quantitative cluster comparisons can be found in Supplementary Material A.8.
4.3 PERFORMANCE COMPARISONS
Then, we evaluate the generalization of model-based RL agents trained by our method and the baselines on test environments with unseen dynamics. Following the setting of (Seo et al., 2020), we perform experiments across five runs and show the test returns on the test environments in Figure 6. Note that the results differ slightly from those reported in the RIA and TMCL papers, since we change the parameter lists that alter the environmental dynamics. Specifically, we set the parameter lists of all environments to be identical for the convenience of running the experiments.
As Figure 6 shows, our method achieves significantly better performance than the baselines on Ant, Halfcheetah, and Pendulum. Specifically, our method outperforms the second-best method, RIA, by 20% on the Ant and Halfcheetah environments, which indicates that the changed parameters can largely alter their dynamics. In addition, our method achieves only slightly better performance than the baselines on the Hopper, Swimmer, and Slim_Humanoid problems. For the Hopper and Slim_Humanoid environments, we observe that both RIA and our method achieve comparable results in all test environments, which indicates that the change of dynamics in these environments is easy to model and solve. For the Swimmer environment, we observe that TMCL (Seo et al., 2020) sometimes suffers a significant performance decline at the final training iteration. This may be because TMCL fails to learn the modalities of the dynamics function in its no-adaptation version. Our method still achieves better performance than RIA on the Swimmer task.
4.4 ABLATION STUDY
In this section, we first perform a sensitivity analysis of how many trajectory prototypes should be combined into an environmental prototype. The experiments are conducted on the Pendulum task, and the results are shown in the left panel of Figure 5: regardless of the value of k, our method consistently outperforms the baseline CaDM (Lee et al., 2020), which indicates that our method is robust to the selection of the k value. Note that k = 1 means that there are no hierarchical prototypes, because one trajectory prototype then determines one environmental prototype, so environmental prototypes coincide with trajectory prototypes. All experimental results with k > 1 are better than the result with k = 1, which shows the effectiveness of our proposed hierarchical prototype method and the necessity of the built environmental prototypes. The result with k = 3 achieves the best performance on the Pendulum task, so we use it as the default value in all experiments.
We also perform an ablation study on the similarity metric used to compare trajectory prototypes. Most clustering methods, e.g., k-means (MacQueen et al., 1967), calculate the similarity among entities using the Euclidean distance, while our method uses the direct causal effect as the similarity metric. To evaluate the effectiveness of the similarity metric based on the direct causal effect (Pearl, 2013), we perform experiments on the Halfcheetah and Pendulum tasks and find that using the causal effect to calculate the similarities among trajectory prototypes achieves better performance than using the Euclidean distance on both tasks.
5 LIMITATION
Our paper only considers unsupervised dynamics generalization in model-based reinforcement learning, but model-free RL also suffers from this problem, and we will apply our method to model-free RL in future work. In addition, there are many other generalization problems in the reinforcement learning area, e.g., observation generalization (Wang et al., 2020; Kirk et al., 2021; Ghosh et al., 2021) and action generalization (Jain et al., 2020), and it would be interesting to extend our method to other generalization settings and train generalizable agents.
6 CONCLUSION
In this paper, we focus on the unsupervised dynamics generalization problem in model-based reinforcement learning, and propose a hierarchical prototypical method to construct environmental prototypes in an unsupervised manner. With the learned environmental prototypes, we further propose a prototypical relational loss to learn a context encoder to estimate environmental-specific factors from past transition segments, which enables the dynamics prediction function in model-based reinforcement learning to generalize well on environments with unseen dynamics. The experiments demonstrate that our method can form clearer and tighter clusters for Ẑs from the same environment and improve the performance of model-based agents in new environments with unseen dynamics.
7 REPRODUCIBILITY STATEMENT
We acknowledge the importance of reproducibility for research work and have made every effort to ensure the reproducibility of our work. As for the implementation of our method, details such as hyperparameters are provided in Section 4.1 and Appendix A.1. We will publicly release all code after the acceptance of this paper.
A APPENDIX
We will publicly release all code after the acceptance of this paper; all training details are provided in Appendix A.1 and A.3.
A.1 ENVIRONMENTAL SETTINGS
We follow the environmental settings of Lee et al. (2020) and Guo et al. (2021) and give the details of the settings as follows:
• Pendulum We modify the mass m and the length l of the Pendulum to change its dynamics.
• Half-Cheetah We modify the mass of the rigid links m and the damping of the joints d of the Half-Cheetah agent to change its dynamics.
• Swimmer We modify the mass of rigid link m and the damping of joint d of Swimmer agent to change its dynamics.
• Ant We modify the mass of the ant’s legs m to change its dynamics. Specifically, we modify two legs by multiplying their original mass by m, and the other two by 1/m.
• Slim_Humanoid We modify the mass of the rigid links m and the damping of the joints d of the Slim_Humanoid agent to change its dynamics.
• Hopper We modify the mass m of the Hopper agent to change its dynamics.
Specifically, all training and test parameter lists are set as {0.75, 0.8, 0.85, 0.90, 0.95, 1, 1.05, 1.1, 1.15, 1.2, 1.25} and {0.2, 0.4, 0.5, 0.7, 1.3, 1.5, 1.6, 1.8}, respectively.
A.2 ALGORITHM
The training procedure is given in Algorithm 1.
A.3 TRAINING DETAILS
Similar to Lee et al. (2020) and Guo et al. (2021), we train our model-based RL agents and context encoder for 20 epochs, and at each epoch we collect 10 trajectories from the environments with an MPC controller using a planning horizon of 30. The cross-entropy method (CEM) with 200 candidate actions is chosen as the planning method. The batch size for each experiment is 128, and β is 6e-1. All modules are trained with an Adam optimizer with a learning rate of 0.001.
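For concreteness, a minimal sketch of CEM planning with the settings above (horizon 30, 200 candidates) is given below; the elite fraction, the number of CEM iterations, and the known reward function `reward_fn` are assumptions, and `f_hat`/`z_hat` are hypothetical names.

```python
import torch

@torch.no_grad()
def cem_plan(f_hat, z_hat, s0, reward_fn, act_dim,
             horizon=30, n_cand=200, n_elite=20, iters=5):
    mu = torch.zeros(horizon, act_dim)
    std = torch.ones(horizon, act_dim)
    for _ in range(iters):
        # Sample candidate action sequences and roll them out in the model.
        acts = mu + std * torch.randn(n_cand, horizon, act_dim)
        s = s0.expand(n_cand, -1)
        ret = torch.zeros(n_cand)
        for t in range(horizon):
            ret = ret + reward_fn(s, acts[:, t])
            s = f_hat(s, acts[:, t], z_hat.expand(n_cand, -1))
        # Refit the sampling distribution to the elite sequences.
        elite = acts[ret.topk(n_elite).indices]
        mu, std = elite.mean(dim=0), elite.std(dim=0)
    return mu[0]                          # execute only the first action (MPC)
```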
A.4 PREDICTION ERROR
A.5 NETWORK DETAILS
Similar to Lee et al. (2020), the context encoder is a simple 3-hidden-layer MLP, and the output dimension of the environment-specific vector ẑ is 10. The relational head is modelled as a single FC layer. The dynamics prediction model is a 4-hidden-layer MLP with 200 units per layer.
Algorithm 1 The training algorithm of our hierarchical prototypical approach
Initialize the parameters of the context encoder ϕ, the dynamics prediction model θ and the relational head φ
Initialize dataset B ← ∅
for each iteration do
    Sample environments M_i from the training environments {M^tr_i}_{i=0}^K ▷ Collecting data
    for T = 1 to TaskHorizon do
        Get the estimate of the environment-specific factor ẑ^i_{t−k:t−1} = g(τ^i_{t−k:t−1}; ϕ)
        Collect (s_t, a_t, s_{t+1}, r_t, τ^i_{t−k:t−1}) from M_i with the dynamics prediction model θ
        Update B ← B ∪ {(s_t, a_t, s_{t+1}, r_t, τ^i_{t−k:t−1})}
        Initialize a trajectory prototype C^i_tra for each sampled trajectory
    end for
    for each dynamics training iteration do ▷ Update ϕ, θ and φ
        for k = 1 to K do
            Sample data (τ^{i,b,P}_{t−k:t−1}, C^i_tra) and (τ^{j,b,P}_{t−k:t−1}, C^j_tra) with batch size B from B
            Get the estimates ẑ^{i,B,P}_{t−k:t−1} = g(τ^{i,B,P}_{t−k:t−1}; ϕ) and ẑ^{j,B,P}_{t−k:t−1} = g(τ^{j,B,P}_{t−k:t−1}; ϕ)
            Estimate the similarity w between C^i_tra and C^j_tra
            Combine the top-k most similar C^i_tra into the environmental prototype C^i_env
            L^tot = L^pred_{ϕ,θ}(τ^{i,B,K}_{t:M}, ẑ^{i,B,P}_{t−k:t−1}) + L^{i−relation}_{ϕ,φ}(ẑ^{i,B,P}_{t−k:t−1}, C^i), with prototypes at different levels
            Update θ, ϕ, φ ← ∇_{θ,ϕ,φ} (1/B) L^tot
        end for
    end for
end for
A.6 DIRECT CAUSAL EFFECT BETWEEN TRAJECTORY PROTOTYPES
Concretely, the direct causal effect difference between two trajectory prototypes c^j_tra and c^k_tra can be calculated through the controllable causal effect (Pearl, 2013) as follows:

CDE_{c^j_tra, c^k_tra}(s_t, a_t) = E[S_{t+1} | do(S_t = s_t, A_t = a_t), do(Z = c^j_tra)]   (5)
                                 − E[S_{t+1} | do(S_t = s_t, A_t = a_t), do(Z = c^k_tra)]   (6)
                                 = E[S_{t+1} | S_t = s_t, A_t = a_t, Z = c^j_tra] − E[S_{t+1} | S_t = s_t, A_t = a_t, Z = c^k_tra],   (7)
where do denotes the do-calculus (Pearl, 2000). Because we control all variables that can influence S_{t+1}, and there is no confounder between the mediators (S_t, A_t) and S_{t+1} other than Z (Guo et al., 2021), we can remove all do operators in Eq. (6). Therefore, the interventional distribution obtained by controlling Z and (S_t, A_t) in Eq. (6) equals the conditional distribution in Eq. (7). In addition, the direct causal effect between c^j_tra and c^k_tra may differ for different values of S_t and A_t, so we sample S_t and A_t independently of Z to calculate the average controlled direct effect.
Concretely, we directly use a mini-batch of S_t and A_t pairs (s^i_t, a^i_t) to calculate their average controlled direct effect, as Figure 2 (a) shows:
w_{jk} = (1/N) Σ_{i=1}^N |CDE_{c^j_tra, c^k_tra}(s^i_t, a^i_t)|,   (8)
where N is the batch size and j and k are the indices of the trajectory prototypes.
A.7 CONNECTION BETWEEN RELATION LOSS AND MUTUAL INFORMATION
We denote the environment-specific factor as Z and its prototypes as C. By definition, the mutual information between Z and C is:
I(Z;C) = E_{P_{ZC}}[ log( p(z, c) / (p(z) p(c)) ) ]   (9)
where P_{ZC} is the joint distribution of Z and C, and P_Z and P_C are their marginal distributions. To estimate the mutual information between Z and C, we can use the probabilistic classifier method proposed by Tsai et al. (2020). Concretely, we introduce a Bernoulli random variable Y that indicates whether a given data pair (z, c) is drawn from the joint distribution P_{ZC} (Y = 1) or from the product of the marginal distributions P_Z P_C (Y = 0). Therefore, the mutual information I(Z;C) between Z and C can be rewritten as:
I(Z;C) = E_{P_{ZC}}[ log( p(z, c) / (p(z) p(c)) ) ]
       = E_{P_{ZC}}[ log( p(z, c | Y = 1) / p(z, c | Y = 0) ) ]
       = E_{P_{ZC}}[ log( (p(Y = 0) P(Y = 1 | z, c)) / (p(Y = 1) P(Y = 0 | z, c)) ) ]   (10)
Obviously, p(Y = 0)/p(Y = 1) can be approximated by the sample-size ratio n_{P_Z P_C}/n_{P_{ZC}}, while P(Y = 1 | z, c)/P(Y = 0 | z, c) can be measured by a classifier h(Y | z, c) trained with the relational loss below:
L^{relation}_{φ,ϕ} = −[ Y · log h([z, c]; φ) + (1 − Y) · log(1 − h([z, c]; φ)) ],   (11)
where Y = 1 if the given pair (z, c) is drawn from the joint distribution P_{ZC}, and Y = 0 if it is drawn from the product of the marginal distributions P_Z P_C. Because p(Y = 0)/p(Y = 1) tends to be a constant, optimizing our relational loss effectively estimates the mutual information I(Z;C) between Z and C. Therefore, optimizing Eq. (11) maximizes the mutual information between ẑ and its corresponding prototype, which represents the semantics of a trajectory or environment. Readers interested in the concrete bounds of this mutual-information estimator may refer to Tsai et al. (2020); Guo et al. (2021).
A.8 VISUALIZATION AND ANALYSIS | 1. What is the focus of the paper regarding learning from multiple MDPs?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its promise and clarity?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper discusses the problem of learning from multiple MDPs, each corresponding to a different value for hidden factors such as physical properties (friction, gravity etc.) The authors propose estimating an embedding based on a similarity matrix of trajectories, and then use a contrastive loss to learn a z that matches the estimated embeddings. The learnt z is fed back into a dynamics model and used in an MBRL algorithm.
Strengths And Weaknesses
Strengths
Important and Relevant problem
Promising general direction for the solution approach
Weakness
Unclear writing
Bad paper structure
Lack of novelty in the proposed approach
Clarity, Quality, Novelty And Reproducibility
The paper is quite confusing to read, lacks quality and novelty, and is fairly hard to reproduce (given the unclear description). |
ICLR | Title
Taking Apart Autoencoders: How do They Encode Geometric Shapes ?
Abstract
We study the precise mechanisms which allow autoencoders to encode and decode a simple geometric shape, the disk. In this carefully controlled setting, we are able to describe the specific form of the optimal solution to the minimisation problem of the training step. We show that the autoencoder indeed approximates this solution during training. Secondly, we identify a clear failure in the generalisation capacity of the autoencoder, namely its inability to interpolate data. Finally, we explore several regularisation schemes to resolve the generalisation problem. Given the great attention that has recently been given to the generative capacity of neural networks, we believe that studying simple geometric cases in depth sheds some light on the generation process and can provide a minimal-requirement experimental setup for more complex architectures.
1 INTRODUCTION
Autoencoders are neural networks, often convolutional neural networks, whose purpose is twofold. The first goal is to compress some input data by transforming it from the input domain to another space, known as the latent, or code, space. The second goal of the autoencoder is to take this latent representation and transform it back to the original space, such that the output is similar, with respect to some criterion, to the input. One of the main objectives of this learning process is to reveal important structure in the data via the latent space, and therefore to represent this data in a more meaningful fashion or one that is easier to model. Autoencoders have proven to be extremely useful in many tasks ranging from image compression to synthesis. Many variants on the basic idea of autoencoders have been proposed, the common theme being how to impose useful properties on the learned latent space. However, very little is known about the actual inner workings and mechanisms of the autoencoder.
The goal of this work is to investigate these mechanisms and describe how the autoencoder functions. Many applications of autoencoders or similar networks consider relatively high-level input objects, ranging from the MNIST handwritten digits to abstract sketches of conceptual objects (Zhu et al. (2016); Ha & Eck (2017)). Here, we take a radically different approach. We consider, in depth, the encoding/decoding processes of a simple geometric shape, the disk, and investigate how the autoencoder functions in this case. There are several important advantages to such an approach. Firstly, since the class of objects we consider has an explicit parametrisation, it is possible to describe the “optimal” performance of the autoencoder, i.e. can it compress and uncompress a disk to and from a code space of dimensionality 1 ? Secondly, the setting of this study fixes certain architecture characteristics of the network, such as the number of layers, leaving fewer free parameters to tune. This means that the conclusions which we obtain are more likely to be robust than in the case of more high-level applications. Finally, it is easier to identify the roles of different components of the network, which enables us to carry out an instructive ablation study.
Using this approach, we show that the autoencoder approximates the theoretical solution of the training problem when no biases are involved in the network. Secondly, we identify certain limitations in the generalisation capacity of autoencoders when the training database is incomplete with respect to the underlying manifold. We observe the same limitation using the architecture of Zhu et al. (2016), which is considerably more complex and is proposed to encode natural images. Finally, we analyse several regularisation schemes and identify one in particular which greatly aids in overcoming this generalisation problem.
2 PRIOR WORK
The concept of autoencoders has been present for some time in the learning community (LeCun (1987); Bourlard & Kamp (1988)). The objective is to train two networks, an “encoder” and a “decoder”, which transform the input data to and from a code, or latent, space which is learned by the algorithm. In many applications, the dimensionality d of the latent space is smaller than that of the original data, so that the autoencoder is encouraged to discover useful features of the data. In practice, we obviously do not know the exact value of d, but we would still like to impose as much structure in the latent space as possible. This idea led to regularisation in the latent space of autoencoders, which comes in several flavours. The first is the sparse autoencoder (Ranzato et al. (2007)), which attempts to have as few active (non-zero) neurons as possible in the network. This can be done either by modifying the loss function to include sparsity-inducing penalisations, or by acting directly on the values of the code z. In the latter option, one can use rectified linear units (ReLUs) to encourage zeros in the code (Glorot et al. (2011)) or simply specify a maximum number of non-zero values as in the “k-sparse” autoencoder (Makhzani & Frey (2013)). Another approach, taken by the variational autoencoder, is to specify the a priori distribution of the code z. Kingma & Welling (2014) use the Kullback-Leibler divergence to achieve this goal, and the authors suppose a Gaussian distribution for z. The “contractive” autoencoder (Rifai et al. (2011)) encourages the derivatives of the code with respect to the input image to be small, meaning that the representation of the image should be robust to small changes in the input.
Autoencoders can be applied to a variety of problems, such as denoising (“denoising autoencoder”) or image compression (Ballé et al. (2016)). For a good overview of autoencoders, see the book of Goodfellow et al. (Goodfellow et al. (2016)). Recently, a great deal of attention has been given to the capacity of CNNs, and in particular generative adversarial networks (GANs) (Radford et al. (2015)) or autoencoders, to generate new images. It is well-known that these networks have important limitations, such as the tendency to produce low quality images or to reproduce images from the training set because of mode collapse. But despite these limitations, many works have investigated the generative capacity of such networks, see for instance Dosovitskiy & Brox (2016); Salimans et al. (2016); Reed et al. (2016); Zhu et al. (2016) and often demonstrated intriguing visual results. In this context, a natural question is : how efficient are such networks at inventing realistic new images ? How well do they generalize visual content ?
3 HOW DO AUTOENCODERS PROCESS VISUAL IMAGES ?
Although autoencoders have been extensively studied, very little is known concerning the actual inner mechanics of these networks, in other words quite simply, how they work. This is obviously much too vast a question in the general case, however very often deep learning is applied to the specific case of images. In this work, we aim to discover how, with a cascade of simple operations common in deep networks, an autoencoder can encode and decode very simple images. In view of this goal, we propose to study in depth the case of disks of variable radii. This controlled setting and
careful study of the autoencoder are the main goals of the paper, and structure our work throughout. Before continuing, we describe our autoencoder in a more formal fashion.
3.1 NOTATION AND AUTOENCODER ARCHITECTURE
We denote input images with $x \in \mathbb{R}^{m \times n}$ and $z \in \mathbb{R}^d$, where $m$ and $n$ are the height and the width of the image, respectively, and $d$ is the dimension of $z$. The autoencoder consists of the couple $(E, D)$, the encoder and decoder which transform to and from the “code” space, with $E : \mathbb{R}^{m \times n} \to \mathbb{R}^d$ and $D : \mathbb{R}^d \to \mathbb{R}^{m \times n}$. As mentioned, the goal of the autoencoder is to compress and uncompress a signal into a representation with a smaller dimensionality, while losing as little information as possible. Thus, we search for the parameters of the encoder and the decoder, which we denote with $\Theta_E$ and $\Theta_D$ respectively, by minimising
$$(\Theta_E, \Theta_D) = \underset{\Theta_E, \Theta_D}{\operatorname{argmin}} \sum_x \lVert x - D(E(x)) \rVert_2^2 \qquad (1)$$
The autoencoder consists of a series of convolutions with filters of small compact support, subsampling/up-sampling, biases and non-linearities. The values of the filters are termed the weights of the network, and we denote the encoding filters with $w_{\ell,i}$, where $\ell$ is the layer number and $i$ the number of the filter. Similarly, we denote the decoding filters $w'_{\ell,i}$, and the encoding and decoding biases $b_{\ell,i}$ and $b'_{\ell,i}$. We choose leaky ReLUs for the non-linearities:
$$\varphi_\alpha(x) = \begin{cases} x, & \text{for } x \ge 0 \\ \alpha x, & \text{for } x < 0, \end{cases} \qquad (2)$$
with parameter α = 0.2. Thus, the output of a given encoding layer is given by
$$E^{\ell+1}_i = \varphi_\alpha(E^{\ell} * w_{\ell,i} + b_{\ell,i}), \qquad (3)$$
and similarly for the decoding layers (except for a zero-padding upsampling prior to the convolution), with weights and biases $w'$ and $b'$, respectively.
We consider images of a fixed (square) spatial support $\Omega = [0, m-1] \times [0, m-1]$ and also that the subsampling rate $s$ is fixed. In the encoder, subsampling is carried out until $z$ is a single scalar. Thus, the number of layers in our encoder and decoder is not an independent parameter. We set the support of all the convolutional filters in our network to $3 \times 3$. The architecture of our autoencoder remains the same throughout the paper, and is shown in Figure 1. We summarise our parameters in Table 1. We now investigate the inner mechanics of autoencoders in the case of a simple geometric shape: the disk.
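To make this setup concrete, the following is a minimal PyTorch sketch of such an autoencoder. The 3×3 filters, stride-s subsampling down to a scalar code, zero-padding upsampling, and leaky ReLU with α = 0.2 follow the text; the 32×32 image size and the hidden channel width are assumed values (the actual ones are fixed by Figure 1 and Table 1).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

m, s, d = 32, 2, 1            # assumed image size; subsampling rate; code size
L = 5                          # number of layers: subsample until z is a scalar
ch = 8                         # assumed hidden channel width

enc_layers = []
for l in range(L):             # 3x3 conv, stride-s subsampling, leaky ReLU
    enc_layers += [nn.Conv2d(1 if l == 0 else ch, d if l == L - 1 else ch,
                             3, stride=s, padding=1), nn.LeakyReLU(0.2)]
encoder = nn.Sequential(*enc_layers)

class UpConv(nn.Module):
    """Zero-padding upsampling followed by a 3x3 convolution and leaky ReLU."""
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, 3, padding=1)
    def forward(self, x):
        up = x.new_zeros(x.size(0), x.size(1), s * x.size(2), s * x.size(3))
        up[:, :, ::s, ::s] = x                    # zero-padding upsampling U
        return F.leaky_relu(self.conv(up), 0.2)

decoder = nn.Sequential(*[UpConv(d if l == 0 else ch, 1 if l == L - 1 else ch)
                          for l in range(L)])

x = torch.rand(4, 1, m, m)                        # placeholder batch of images
recon = decoder(encoder(x))                        # (4, 1, m, m)
loss = ((x - recon) ** 2).sum()                    # the objective of Equation (1)
```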
3.2 AUTOENCODING DISKS
Our training set consists of binary images of centred disks of random radii, with one disk per image; the test database is built in the same way. Each disk image is determined by the indicator function of a disk of radius r, and is therefore binary. Theoretically, an optimal encoder would only need one scalar to represent the image. Therefore the architecture in Figure 1 is set up to ensure a code size d = 1. Our first important observation (see Figure 2) is that not only can the network learn to encode/decode
disks, but that the code z which is learned can be interpolated and the corresponding decoding is meaningful. Thus, in this case, the autoencoder is able to encode/decode the data in an optimal fashion. We now proceed to see how the autoencoder actually works on a detailed level, starting with the encoding step.
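A possible NumPy generator for such a database (the 32×32 image size and the radius range are assumed values, used again in later sketches):

```python
import numpy as np

def disk_image(r, m=32):
    """Binary m x m image: the indicator function of a centred disk of radius r."""
    c = (m - 1) / 2.0
    yy, xx = np.mgrid[0:m, 0:m]
    return (((xx - c) ** 2 + (yy - c) ** 2) <= r ** 2).astype(np.float32)

rng = np.random.default_rng(0)
radii = rng.uniform(1.0, 16.0, size=2000)          # one random radius per image
images = np.stack([disk_image(r) for r in radii])[:, None]  # (N, 1, m, m)
```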
3.2.1 ENCODING A DISK
Encoding a centred disk of a certain radius to a scalar z can be done in several ways, the most intuitive being integrating over the area of the disk (encoding a scalar proportional to its area) or integrating over the perimeter of the disk (encoding a scalar proportional to its radius). The empirical evidence given by our experiments points towards the first option, since z seems to represent the area and not the radius of the input disks (see Figure 2). If this is the case, the integration operation can be done by means of a simple cascade of linear filters. As such, we should be able to encode the disks with a network containing only convolutions and sub-sampling, and having no non-linearities. We have verified this experimentally with such an encoder.
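As a quick check of this interpretation, reusing disk_image from the generator above: a cascade of 2×2 sum-poolings, which is exactly a convolution with an all-ones filter followed by subsampling and no non-linearity, reduces the image to its pixel sum, which closely tracks the area πr².

```python
def sum_pool_encoder(img):
    """Linear encoder: repeated 2x2 sum-pooling down to a single scalar."""
    z = img
    while z.size > 1:
        z = z[0::2, 0::2] + z[0::2, 1::2] + z[1::2, 0::2] + z[1::2, 1::2]
    return z.item()

for r in (4, 8, 12):
    print(r, sum_pool_encoder(disk_image(r)), np.pi * r ** 2)  # z ~ pi r^2
```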
3.2.2 DECODING A DISK
A more difficult question is how the autoencoder converts a scalar z into an output disk of a certain size (the decoding process). One approach to understanding the inner workings of autoencoders, and indeed any neural network, is to remove certain elements of the network and to see how it responds, otherwise known as an ablation study. We found that removing the biases of the autoencoder leads to very interesting observations. While, as we have shown, the encoder is perfectly able to function without these biases, this is not the case for the decoder. Figure 3 shows the results of this ablation. The decoder learns to spread the energy of z in the output according to a certain function g. Thus, the goal of the biases is to shift the intermediary (hidden layer) images such that a cut-off can be carried out to create a satisfactory decoding. We have investigated the behaviour of the decoder without biases in detail. In particular, we will derive an explicit form for the energy minimised by the network, for which a closed-form solution can be found (see Appendix A), but more importantly for which we will show experimentally that the network finds the right solution. We first make a general observation about this configuration (without biases).
Proposition 1 (Positive Multiplicative Action of the Decoder Without Biases). Consider a decoder without biases, $D(z) = D_L \circ \dots \circ D_1(z)$, with $D_{\ell+1} = \varphi_\alpha\left( U(D_\ell) * w'_{\ell,i} \right)$, where $U$ stands for upsampling with zero-padding. In this case, the decoder acts multiplicatively on $z$, meaning that

$$\forall z,\ \forall \lambda \in \mathbb{R}^+, \quad D(\lambda z) = \lambda D(z).$$
Proof. For a fixed $z$ and for any $\lambda > 0$, we have

$$D_1(\lambda z) = \varphi_\alpha\left(U(\lambda z) * w'_\ell\right) = \max\left(\lambda (U(z) * w'_\ell), 0\right) + \alpha \min\left(\lambda (U(z) * w'_\ell), 0\right) = \lambda \max\left(U(z) * w'_\ell, 0\right) + \lambda \alpha \min\left(U(z) * w'_\ell, 0\right) = \lambda \varphi_\alpha\left(U(z) * w'_\ell\right) = \lambda D_1(z). \qquad (4)$$
This reasoning can be applied successively to each layer up to the output y. When the code z is one dimensional, the decoder can be summarized as two linear functions, one for positive codes and a second one for the negative codes. However, in all our experiments, the autoencoder without bias has chosen to use only one possible sign for the code, resulting in a linear decoder.
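Proposition 1 is easy to verify numerically; the sketch below strips the biases from the hypothetical decoder defined earlier and checks that D(λz) = λD(z) for a positive λ.

```python
import torch
import torch.nn as nn

for mod in decoder.modules():                  # remove every convolution bias
    if isinstance(mod, nn.Conv2d):
        mod.bias = None

z, lam = torch.rand(4, 1, 1, 1), 3.7
with torch.no_grad():
    print(torch.allclose(decoder(lam * z), lam * decoder(z), atol=1e-5))  # True
```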
Furthermore, the profiles in Figure 3 suggest that a single function is learned, and that this function is multiplied by a factor which is constant for each radius. In light of Proposition 1, this means that the decoder has chosen a fixed sign for the code and that the decoder is linear. This can be expressed as
$$y(t, r) = h(r) f(t), \qquad (5)$$
where $t$ is a spatial variable and $r \in (0, \frac{m}{2}]$ is the radius of the disk. This is checked experimentally in Figure 7 in Appendix A. In this case, we can write the optimisation problem of the decoder as
$$\hat{f}, \hat{h} = \underset{f, h}{\operatorname{argmin}} \int_0^R \int_\Omega \left( h(r) f(t) - \mathbf{1}_{B_r}(t) \right)^2 dt\, dr, \qquad (6)$$
where $R$ is the maximum radius observed in the training set, $\Omega = [0, m-1] \times [0, m-1]$ is the image domain, and $B_r$ is the disk of radius $r$. Note that we have expressed the minimisation problem for continuous functions $f$. This is not strictly the case, especially for images of small disk radii, however for our purposes the approximation is good. In this case, we have the following proposition.
Proposition 2 (Decoding Energy for an Autoencoder without Biases). The decoding training problem of the autoencoder without biases has an optimal solution $\hat{f}$ that is radially symmetric and maximises the following energy:
$$E(f) := \int_0^R \left( \int_0^r f(\rho)\, \mathbf{1}_{[0,r]}(\rho)\, \rho\, d\rho \right)^2 dr, \qquad (7)$$
under the (arbitrary) normalisation $\lVert f \rVert_2^2 = 1$.
Proof. When $f$ is fixed, the optimal $h$ for Equation (6) is given by
$$\hat{h}(r) = \frac{\langle f, \mathbf{1}_{B_r} \rangle}{\lVert f \rVert_2^2}, \qquad (8)$$
where $\langle f, \mathbf{1}_{B_r} \rangle = \int_\Omega f(t) \mathbf{1}_{B_r}(t)\, dt$. After replacing this in Equation (6), we find that
$$\hat{f} = \underset{f}{\operatorname{argmin}} \int_0^R -\frac{\langle f, \mathbf{1}_{B_r} \rangle^2}{\lVert f \rVert_2^2}\, dr = \underset{f}{\operatorname{argmin}} \int_0^R -\langle f, \mathbf{1}_{B_r} \rangle^2\, dr, \qquad (9)$$
where we have chosen the arbitrary normalisation $\lVert f \rVert_2^2 = 1$. The form of the last equation shows that the optimal solution is radially symmetric¹. Therefore, after a change of variables, the energy maximised by the decoder can be written as
$$E(f) := \int_0^R \left( \int_0^r f(\rho)\, \mathbf{1}_{[0,r]}(\rho)\, \rho\, d\rho \right)^2 dr, \qquad (10)$$
such that $\lVert f \rVert_2^2 = 1$. In Appendix A, we compare the numerical solution of this problem with the actual profile learned by the network, yielding a very close match. This result is very interesting, since it shows that the training process has achieved the optimal solution, in spite of the fact that the loss is non-convex.
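For reference, a minimal NumPy sketch of this numerical optimisation: discretise f on a radial grid and run projected gradient ascent on E(f), renormalising so that ‖f‖2 = 1 after every step (the grid, step size, and iteration count are arbitrary choices).

```python
import numpy as np

R, n = 16.0, 200
rho = np.linspace(0.0, R, n)
dr = rho[1] - rho[0]

f = np.ones(n)
f /= np.linalg.norm(f) * np.sqrt(dr)              # enforce int f^2 drho = 1
for _ in range(2000):
    Fr = np.cumsum(f * rho) * dr                  # F(r) = int_0^r f(p) p dp
    tail = np.cumsum(Fr[::-1])[::-1] * dr         # int_rho^R F(r) dr
    f += 0.1 * 2.0 * rho * tail                   # functional gradient of E(f)
    f /= np.linalg.norm(f) * np.sqrt(dr)          # project back onto the sphere
# f now approximates the optimal radial profile of the bias-free decoder
```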
3.2.3 GENERALISATION AND REGULARISATION
As we have recalled in Section 2, many works have recently investigated the generative capacity of autoencoders or GANs. Nevertheless, it is not clear that these architectures truly invent or generalize some visual content. A simpler question is : to what extent is the network able to generalise a simple geometric notion ? In this section, we address this issue in our restricted but interpretable case.
1If not, then consider its mean on every circle, which decreases the L2 norm of f while maintaining the scalar product with any disk. We can then increase the energy back by dividing by this smaller L2 norm, according to ‖f‖2 = 1.
For this, we study the behaviour of our autoencoder when examples are removed from the training dataset. In Figure 4, we show the autoencoder result when the disks with radii above a certain threshold R are removed. The radii of the left three images (with a green border) are present in the training database, whereas the radii of the right three (red border) have not been observed. It is clear that the network lacks the capacity to extrapolate further than this radius. Indeed, the autoencoder seems to project these disks onto smaller, observed, disks, rather than learning the abstraction of a disk.
Again by removing the biases from the network, we may explain why the autoencoder fails to extrapolate when a maximum radius R is imposed. In Appendix B, we show experimental evidence that in this situation, the autoencoder learns a function f whose support is restricted by the value of R, leading to the autoencoder’s failure. However, a fair criticism of the previous experiment is simply that the network (and deep learning in general) is not designed to work on data which lie outside of the domain observed in the training data set. Nevertheless, it is reasonable to expect the network to be robust to such “holes” inside the domain. Therefore, we have also analysed the behaviour of the autoencoder when we removed training datapoints whose disks’ radii lie within a certain range, between 11 and 18 pixels (out of a total of 32). We then attempt to reconstruct these points in the test data. Figure 5 shows the results of this experiment. Once again, in the unknown regions the network is unable to recreate the input disks. Goodfellow et al. (2016) (page 521) and Bengio & Monperrus (2005) propose several explanations of this phenomenon in the deep learning literature, such as a high curvature of the underlying data manifold, noisy data or high intrinsic dimensionality of the data. In our setting, none of these explanations is sufficient. Thus we conclude that, even in the simple setting of disks, the “classic” autoencoder cannot generalise correctly when a database contains holes.
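Reusing radii and images from the generator sketch above, the held-out band is a simple mask:

```python
keep = (radii < 11) | (radii > 18)     # remove the band of radii in [11, 18]
train_images = images[keep]
held_out = images[~keep]               # reconstructed only at test time
```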
This behavior is potentially problematic for applications which deal with more complex natural images, lying on a high-dimensional manifold, as these are likely to contain such holes. We have therefore carried out the same experiments using the state-of-the-art “iGAN” approach of Zhu et al. (2016), which is in turn based on the work of Radford et al. (2015), “DCGAN”. The visual results of their algorithm are displayed in Appendix C. We trained their network using both a code size of d = 100 (as proposed by the authors), and d = 1 in order to ensure fair comparisons. Indeed, in our case, not only the dimension of the latent space should be d = 1, but also the amount of training data is not enough to work with d = 100. Although the d = 1 case leads to improved results, in
both cases the network fails to correctly autoencode the disks belonging to the unobserved region. This shows that the generalisation problem is likely to be ubiquitous, and indeed observed in more sophisticated networks, designed to learn natural image manifolds, even in the simple case of disks. We therefore believe that this issue deserves careful attention. In fact, this experiment suggests that the capacity to generate new and simple geometrical shapes could be taken as a minimal requirement for a given architecture.
In order to address the problem, we now investigate several regularisation techniques whose goal is to aid the generalisation capacity of neural networks.
3.2.4 REGULARISATION
We would like to impose some structure on the latent space in order to interpolate correctly in the case of missing datapoints. This is often achieved via some sort of regularisation. This regularisation can come in many forms, such as imposing a certain distribution in the latent space, as in variational autoencoders (Kingma & Welling (2014)), or by encouraging z to be sparse, as in sparse autoencoders (Ranzato et al. (2007); Makhzani & Frey (2013)). In the present case, the former is not particularly useful, since a probabilistic approach will not encourage the latent space to correctly interpolate. The latter regularisation does not apply, since we already have d = 1. Another commonly used approach is to impose an ℓ2 penalisation on the weights of the filters in the network. The idea behind this bears some similarity to sparse regularisation; we wish for the latent space to be as “simple” as possible, and therefore hope to avoid over-fitting.
We have implemented several regularisation techniques on our network. Firstly, we attempt a simple regularisation of the latent space by requiring a “locality-preservation” property as suggested in Hadsell et al. (2006); Alain & Bengio (2014); Liao et al. (2017), namely that the ℓ2 distance between two images (x, x′) be maintained in the latent space. This is done by randomly selecting a neighbour of each element in the training batch. Secondly, we regularise the weights of the encoder and/or the
decoder. Thus, our training attempts to minimise the sum of the data term, $\lVert x - D(E(x)) \rVert_2^2$, and a regularisation term $\lambda \psi$, which can take one of the following forms:
• Type 1: $\psi(x, x') = \left( \lVert x - x' \rVert_2^2 - \lVert E(x) - E(x') \rVert_2^2 \right)^2$;
• Type 2: $\psi(\Theta_E, \Theta_D) = \sum_{\ell=1}^L \lVert w_{\cdot,\ell} \rVert_2^2 + \lVert w'_{\cdot,\ell} \rVert_2^2$;
• Type 3: $\psi(\Theta_E) = \sum_{\ell=1}^L \lVert w_{\cdot,\ell} \rVert_2^2$.
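A possible PyTorch rendering of these three penalties (the function names are hypothetical; note that Types 2 and 3 penalise only the filter weights, not the biases):

```python
def type1(x, x2, E):
    """Locality preservation: match l2 distances in image and latent space."""
    dx = (x - x2).flatten(1).pow(2).sum(1)
    dz = (E(x) - E(x2)).flatten(1).pow(2).sum(1)
    return (dx - dz).pow(2).mean()

def weight_l2(net):
    """Sum of squared filter weights, skipping biases."""
    return sum(p.pow(2).sum() for n, p in net.named_parameters() if "weight" in n)

def type2(enc, dec): return weight_l2(enc) + weight_l2(dec)
def type3(enc):      return weight_l2(enc)

# total loss: ((x - decoder(encoder(x))) ** 2).sum() + lam * psi
```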
Figure 6 shows the results of these experiments. First of all, we observe that the type 1 regularisation does not work satisfactorily. One interpretation of this is that the manifold in the training data is “discontinuous”, and therefore there are no close neighbours for the disks on the edge of the unobserved region. Therefore, this regularisation is to be avoided in cases where there are significant holes in the sampling of the data manifold. The second type of regularisation, minimising the ℓ2 norm of the encoder and decoder weights, produces an interesting effect. Indeed, while the manifold seems reasonable, upon closer inspection, the code z increases in amplitude during the training. Thus, the network cannot converge to a stable solution, which worsens the quality of the results. Finally, we observe that regularising the weights of the encoder works particularly well, and that the resulting manifold is continuous and correctly represents the area of the disks. Consequently, this asymmetrical regularisation approach is to be encouraged in other applications of autoencoders.
At this point, we take the opportunity to note that the clear, marked effects seen with the different regularisation approaches are consistently observed in different training runs. This is due in large part to the controlled, simple setting of autoencoding with disks. Indeed, many other more sophisticated networks, especially GANs, are known to be very difficult to train (Salimans et al. (2016)), leading to unstable results or poor reproducibility. We believe that our approach can be of use to more high-level applications, by making it easier to clearly identify which components and regularisation schemes best help in processing complex input data.
3.3 CONCLUSION AND FUTURE WORK
We have investigated in detail the specific mechanisms which allow autoencoders to encode image information in an optimal manner in the specific case of disks. We have shown that, in this case, the encoder functions by integrating over the disk, and so the code z represents the area of the disk. In the case where the autoencoder is trained with no biases, the decoder learns a single function which is multiplied by a scalar depending on the input. We have shown that this function corresponds to the optimal function. The biases are then used to induce a thresholding process that ensures the disk is correctly decoded. We have also illustrated certain limitations of the autoencoder with respect to generalisation when datapoints are missing in the training set. This is especially problematic for higher-level applications, whose data have higher intrinsic dimensionality and are therefore more likely to include such “holes”. Finally, we identify a regularisation approach which is able to overcome this problem particularly well. This regularisation is asymmetrical, as it consists of regularising the encoder while leaving more freedom to the decoder.
An important future goal is to extend the theoretical analyses obtained to increasingly complex visual objects, in order to understand whether the same mechanisms remain in place. We have experimented with other simple geometric objects such as squares and ellipses, with similar results for an optimal code size. Another question is how the decoder functions with the biases included. This requires a careful study of the different non-linearity activations as the radius increases. Finally, the ultimate goal of these studies is to determine the capacity of autoencoders to encode and generate images representing more complex objects or scenes. As we have seen, the proposed framework can help identify some limitations of complex networks such as the one from Zhu et al. (2016), and future works should investigate whether this framework can help develop the right regularisation scheme or architecture.
A DECODING OF A DISK
During the training of the autoencoder in the case of disks (with no biases in the autoencoder), the objective of the decoder is to convert a scalar into the image of a disk, with the ℓ2 distance as a metric. Given the profiles of the output of the autoencoder, we have made the hypothesis that the decoder approximates a disk of radius r with a function y(t; r) = h(r)f(t), where f is a continuous function. We verify this experimentally in Figure 7 by determining f as the average of all output profiles, and showing the pointwise division of f by randomly selected output profiles. We see that h is approximately constant for varying t and fixed r. Please note that we have removed the last spatial coordinate of the profile, which suffers from border effects.
We now compare the numerical optimisation of the energy in Equation (7), using a gradient descent approach, with the profile obtained by the autoencoder without biases. The resulting comparison can be seen in Figure 8. One can also derive a closed-form solution of Equation (7) by means of the Euler-Lagrange equation and see that the optimal f for Equation (7) is the solution of the differential equation y′′ = −kty with initial state (y, y′) = (1, 0), where k is a free positive constant that accommodates the position of the first zero of y. This gives a closed form for f in terms of Airy functions.
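The closed-form profile can also be computed directly with SciPy; k = 1 below is an arbitrary choice, whereas in practice k is tuned so that the first zero of y falls at the desired radius:

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0
sol = solve_ivp(lambda t, y: [y[1], -k * t * y[0]],   # y'' = -k t y
                (0.0, 10.0), [1.0, 0.0], dense_output=True, max_step=0.01)
t = np.linspace(0.0, 10.0, 500)
profile = sol.sol(t)[0]          # the Airy-type solution with (y, y') = (1, 0)
```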
B AUTOENCODING DISKS WITH A DATABASE WITH A LIMITED OBSERVED RADIUS
In Figure 9, we see the grey levels of the input/output of an autoencoder trained (without biases) on a restricted database, that is to say a database whose disks have a maximum radius R smaller than the image width. We have used R = 18 for these experiments. We see that the decoder learns a useful function f which only extends to this maximum radius. Beyond this radius, another, untuned function is used, corresponding to the other sign of the code (see Proposition 1).
C AUTOENCODING DISKS WITH THE IGAN ZHU ET AL. (2016)
In Figure 10, we show the autoencoding results of the IGAN network of Zhu et al. We trained their network with a code size of both d = 100 and d = 1. Although the IGAN works better in the latter case, in both experiments the network fails to correctly autoencode disks in the missing radius region which has not been observed in the training database.

1. What are the strengths and weaknesses of the paper regarding its idea and comprehensiveness?
2. How does the reviewer suggest visualizing the input data space, and what insights could be gained from doing so?
3. What are the reviewer's concerns regarding the training data size and its impact on the study's conclusions?
4. How would the intermediate feature maps appear, and why does the reviewer ask this question?
5. Does the reviewer think the network architecture is too deep for the data characteristics and size of the training set? Why or why not?
6. What reservations does the reviewer have regarding the claim that other shapes will be explored in future work?
7. How might batch normalization and dropout affect the study's results, according to the reviewer?
8. Why does the reviewer consider the size of 'd' critical for autoencoders, and how does this relate to the paper's findings?

Review | Review
1. The idea is interesting, but the study is not comprehensive yet.
2. The authors need to visualize the input data space, with the training data, the test data, and the 'gaps' in the training data [see a recent related paper - Stoecklein et al. Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data. Scientific Reports 7, Article number: 46368 (2017)].
3. What is the effect of training data size?
4. What do the intermediate feature maps look like?
5. Is there an effect of the number of layers? Maybe the network architecture is too deep for the simple data characteristics and size of the training set.
6. Other shapes are said to be part of future work, but I am not convinced that serious conclusions can be drawn from this study alone.
7. What about the possible effects of batch normalization and dropout?
8. The size of 'd' is critical for autoencoders; only one example in the appendix does not do it justice. Also, it seems other color channels show up in the results (Fig. 10); wasn't the input binary?
1. What is the main contribution of the paper?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the novelty and significance of the work?
4. Are there any concerns or limitations regarding the experimental design or analysis?
5. What are the implications of the results, and how do they contribute to the field?
6. Is there any suggestion for future research or improvement?

Review | Review
The paper considers a toy problem: the space of images of discs of variable radius - a one dimensional manifold.
The authors experiment with an autoencoder based on convolutional layers with ReLU and a 1D embedding.
It is shown that
1) if the bias is not included, the resulting function is homogeneous (meaning f(ax)=af(x)), and so it fails because the 1D representation should be the radius, and the relationship from radius to image is more complex than a homogeneous function.
2) if we include the bias and L2-regularise only the encoder weights, it works better in terms of interpolation for a limited data sample.
The thing is that 1) is trivial (the composition of homogeneous functions is homogeneous... so their proof is overly messy btw). Then, they continue by further analysing (see proposition 2) the solution for this case. Such analysis does not seem to shed much light on anything relevant, given that we know the autoencoder fails in this case due to the trivial proposition 1.
Another point: since the homogeneous function problem will not arise for other non-linearities (such as the sigmoid), the focus on the bias as the culprit seems arbitrary.
Then, the story about interpolation and regularisation is kind of orthogonal, and is then solved by an arbitrary regularisation scheme. The lesson learned from this case is basically the second-to-last paragraph of section 3.2. In other words, it just works.
Since it's a toy problem anyway, the insights seem somewhat trivial.
On the plus side, such a toy problem seems like it might lead somewhere interesting. I'd like to see a similar setup but with a suite of toy problems. e.g. vary the aspect ratio of an oval (rather than a disc), vary the position, intensity, etc etc. |
ICLR | Title
Taking Apart Autoencoders: How do They Encode Geometric Shapes ?
Abstract
We study the precise mechanisms which allow autoencoders to encode and decode a simple geometric shape, the disk. In this carefully controlled setting, we are able to describe the specific form of the optimal solution to the minimisation problem of the training step. We show that the autoencoder indeed approximates this solution during training. Secondly, we identify a clear failure in the generalisation capacity of the autoencoder, namely its inability to interpolate data. Finally, we explore several regularisation schemes to resolve the generalisation problem. Given the great attention that has been recently given to the generative capacity of neural networks, we believe that studying in depth simple geometric cases sheds some light on the generation process and can provide a minimal requirement experimental setup for more complex architectures.
1 INTRODUCTION
Autoencoders are neural networks, often convolutional neural networks, whose purpose is twofold. Firstly, to compress some input data by transforming it from the input domain to another space, known as the latent, or code, space. The second goal of the autoencoder is to take this latent representation and transform it back to the original space, such that the output is similar, with respect to some criterion, to the input. One of the main objectives of this learning process is to reveal important structure in the data via the latent space, and therefore to represent this data in a more meaningful fashion or in one that is easier to model. Autoencoders have been proven to be extremely useful in many tasks ranging from image compression to synthesis. Many variants on the basic idea of autoencoders have been proposed, the common theme being how to impose useful properties on the learned latent space. However, very little is known about the actual inner workings and mechanisms of the autoencoder.
The goal of this work is to investigate these mechanisms and describe how the autoencoder functions. Many applications of autoencoders or similar networks consider relatively high-level input objects, ranging from the MNIST handwritten digits to abstract sketches of conceptual objects (Zhu et al. (2016); Ha & Eck (2017)). Here, we take a radically different approach. We consider, in depth, the encoding/decoding processes of a simple geometric shape, the disk, and investigate how the autoencoder functions in this case. There are several important advantages to such an approach. Firstly, since the class of objects we consider has an explicit parametrisation, it is possible to describe the “optimal” performance of the autoencoder, ie. can it compress and uncompress a disk to and from a code space of dimensionality 1 ? Secondly, the setting of this study fixes certain architecture characteristics of the network, such as the number of layers, leaving fewer free parameters to tune. This means that the conclusions which we obtain are more likely to be robust than in the case of more high-level applications. Finally, it is easier to identify the roles of different components of the network, which enables us to carry out an instructive ablation study.
Using this approach, we show that the autoencoder approximates the theoretical solution of the training problem when no biases are involved in the network. Secondly, we identify certain limitations in the generalisation capacity of autoencoders when the training database is incomplete with respect to the underlying manifold. We observe the same limitation using the architecture of Zhu et al. (2016), which is considerably more complex and is proposed to encode natural images. Finally, we analyse several regularisation schemes and identify one in particular which greatly aids in overcoming this generalisation problem.
2 PRIOR WORK
The concept of autoencoders has been present for some time in the learning community (LeCun (1987); Bourlard & Kamp (1988)). The objective is to train two networks, an “encoder” and a “decoder”, which transform the input data to and from a code, or latent, space which is learned by the algorithm. In many applications, the dimensionality d of the latent space is smaller than that of the original data, so that the autoencoder is encouraged to discover useful features of the data. In practice, we obviously do not know the exact value of d, but we would still like to impose as much structure in the latent space as possible. This idea led to the regularisation in the latent space of autoencoders, which comes in several flavours. The first is the sparse autoencoder (Ranzato et al. (2007)), which attempts to have as few active (non-zero) neurons as possible in the network. This can be done either by modifying the loss function to include sparsity-inducing penalisations, or by acting directly on the values of the code z. In the latter option, one can use rectified linear units (ReLUs) to encourage zeros in the code (Glorot et al. (2011)) or by simply specifying a maximum number of non-zero values as in the “k-sparse” autoencoder (Makhzani & Frey (2013)). Another approach, taken by the variational autoencoder, is to specify the a priori distribution of the code z. Kingma & Welling (2014) use the Kullback-Leibler divergence to achieve this goal, and the authors suppose a Gaussian distribution of z. The “contractive” autoencoder (Rifai et al. (2011)) encourages the derivatives of the code with respect to the input image to be small, meaning that the representation of the image should be robust to small changes in the input.
Autoencoders can be applied to a variety of problems, such as denoising (“denoising autoencoder”) or image compression (Ballé et al. (2016)). For a good overview of autoencoders, see the book of Goodfellow et al. (Goodfellow et al. (2016)). Recently, a great deal of attention has been given to the capacity of CNNs, and in particular generative adversarial networks (GANs) (Radford et al. (2015)) or autoencoders, to generate new images. It is well-known that these networks have important limitations, such as the tendency to produce low quality images or to reproduce images from the training set because of mode collapse. But despite these limitations, many works have investigated the generative capacity of such networks, see for instance Dosovitskiy & Brox (2016); Salimans et al. (2016); Reed et al. (2016); Zhu et al. (2016) and often demonstrated intriguing visual results. In this context, a natural question is : how efficient are such networks at inventing realistic new images ? How well do they generalize visual content ?
3 HOW DO AUTOENCODERS PROCESS VISUAL IMAGES ?
Although autoencoders have been extensively studied, very little is known concerning the actual inner mechanics of these networks, in other words quite simply, how they work. This is obviously much too vast a question in the general case, however very often deep learning is applied to the specific case of images. In this work, we aim to discover how, with a cascade of simple operations common in deep networks, an autoencoder can encode and decode very simple images. In view of this goal, we propose to study in depth the case of disks of variable radii. This controlled setting and
careful study of the autoencoder are the main goals of the paper, and structure our work throughout. Before continuing, we describe our autoencoder in a more formal fashion.
3.1 NOTATION AND AUTOENCODER ARCHITECTURE
We denote input images with x ∈ R^{m×n} and z ∈ R^d, where m and n are the height and the width of the image, respectively, and d is the dimension of z. The autoencoder consists of the couple (E, D), the encoder and decoder which transform to and from the “code” space, with E : R^{m×n} → R^d and D : R^d → R^{m×n}. As mentioned, the goal of the autoencoder is to compress and uncompress a signal into a representation with a smaller dimensionality, while losing as little information as possible. Thus, we search for the parameters of the encoder and the decoder, which we denote with Θ_E and Θ_D respectively, by minimising
$$(\Theta_E, \Theta_D) = \operatorname*{argmin}_{\Theta_E,\,\Theta_D} \sum_x \|x - D(E(x))\|_2^2 \qquad (1)$$
The autoencoder consists of a series of convolutions with filters of small compact support, sub-sampling/up-sampling, biases and non-linearities. The values of the filters are termed the weights of the network, and we denote the encoding filters with $w_{\ell,i}$, where $\ell$ is the layer number and $i$ the number of the filter. Similarly, we denote the decoding filters $w'_{\ell,i}$, and the encoding and decoding biases $b_{\ell,i}$ and $b'_{\ell,i}$. We choose leaky ReLUs for the non-linearities:
$$\varphi_\alpha(x) = \begin{cases} x, & \text{for } x \ge 0 \\ \alpha x, & \text{for } x < 0, \end{cases} \qquad (2)$$
with parameter α = 0.2. Thus, the output of a given encoding layer is given by
$$E^{\ell+1}_i = \varphi_\alpha\left(E^\ell * w_{\ell,i} + b_{\ell,i}\right), \qquad (3)$$
and similarly for the decoding layers (except for a zero-padding upsampling prior to the convolution), with weights and biases $w'$ and $b'$, respectively.
We consider images of a fixed (square) spatial support Ω = [0, m−1] × [0, m−1] and also that the subsampling rate s is fixed. In the encoder, subsampling is carried out until z is a single scalar. Thus, the number of layers in our encoder and decoder is not an independent parameter. We set the support of all the convolutional filters in our network to 3 × 3. The architecture of our autoencoder remains the same throughout the paper, and is shown in Figure 1. We summarise our parameters in Table 1. We now investigate the inner mechanics of autoencoders in the case of a simple geometric shape: the disk.
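For concreteness, the following is a minimal PyTorch sketch of the architecture just described; the channel widths are assumptions on our part (the paper's values are in Table 1), and we assume 32 × 32 inputs with a subsampling rate s = 2:

```python
import torch
import torch.nn as nn

class DecLayer(nn.Module):
    """Zero-padding upsampling U followed by a 3x3 convolution, as in Eq. (3)."""
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, 3, padding=1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        b, c, h, w = x.shape
        up = x.new_zeros(b, c, 2 * h, 2 * w)
        up[:, :, ::2, ::2] = x  # insert zeros between samples
        return self.act(self.conv(up))

def enc_layer(cin, cout):
    # 3x3 convolution with stride-2 subsampling and leaky ReLU (alpha = 0.2)
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.LeakyReLU(0.2))

channels = [1, 8, 8, 8, 8, 1]  # hypothetical widths; 32 -> 1 in five halvings
encoder = nn.Sequential(*[enc_layer(a, b) for a, b in zip(channels[:-1], channels[1:])])
decoder = nn.Sequential(*[DecLayer(a, b) for a, b in
                          zip(channels[::-1][:-1], channels[::-1][1:])])

x = torch.rand(4, 1, 32, 32)  # a batch of disk images
z = encoder(x)                # code of dimension d = 1 per image (shape 4x1x1x1)
y = decoder(z)                # reconstruction, trained with the loss in Eq. (1)
```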
3.2 AUTOENCODING DISKS
Our training set consists of binary images of centred disks of random radii, with one disk per image in the test database. Each disk image is determined by the indicator function of a disk of radius r, and is therefore binary. Theoretically, an optimal encoder would only need one scalar to represent the image. Therefore the architecture in Figure 1 is set up to ensure a code size d = 1. Our first important observation (see Figure 2) is that not only can the network learn to encode/decode
disks, but that the code z which is learned can be interpolated and the corresponding decoding is meaningful. Thus, in this case, the autoencoder is able to encode/decode the data in an optimal fashion. We now proceed to see how the autoencoder actually works on a detailed level, starting with the encoding step.
3.2.1 ENCODING A DISK
Encoding a centred disk of a certain radius to a scalar z can be done in several ways, the most intuitive being integrating over the area of the disk (encoding a scalar proportionate to its area) or integrating over the perimeter of the disk (encoding a scalar proportionate to its radius). The empirical evidence given by our experiments points towards the first option, since z seems to represent the area and not the radius of the input disks (see Figure 2). If this is the case, the integration operation can be done by means of a simple cascade of linear filters. As such, we should be able to encode the disks with a network containing only convolutions and sub-sampling, and no non-linearities. We have verified this experimentally with such an encoder.
3.2.2 DECODING A DISK
A more difficult question is how does the autoencoder convert a scalar, z, to an output disk of a certain size (the decoding process). One approach to understanding the inner workings of autoencoders, and indeed any neural network, is to remove certain elements of the network and to see how it responds, otherwise known as an ablation study. We found that removing the biases of the autoencoder leads to very interesting observations. While, as we have shown, the encoder is perfectly able to function without these biases, this is not the case for the decoder. Figure 3 shows the results of this ablation. The decoder learns to spread the energy of z in the output according to a certain function g. Thus, the goal of the biases is to shift the intermediary (hidden layer) images such that a cut-off can be carried out to create a satisfactory decoding. We have investigated the behaviour of the decoder without biases in detail. In particular, we will derive an explicit form for the energy minimized by the network, for which a closed form solution can be found (see Appendix A), but more importantly for which we will show experimentally that the network finds the right solution. We first make a general observation about this configuration (without biases).
Proposition 1. [Positive Multiplicative Action of the Decoder Without Bias] Consider a decoder without biases, $D(z) = D_L \circ \cdots \circ D_1(z)$, with $D_{\ell+1} = \varphi_\alpha\left(U(D_\ell) * w'_{\ell,i}\right)$, where U stands for upsampling with zero-padding. In this case, the decoder acts multiplicatively on z, meaning that
$$\forall z,\ \forall \lambda \in \mathbb{R}^+,\quad D(\lambda z) = \lambda D(z).$$
Proof. For a fixed z and any λ > 0, we have
$$D_1(\lambda z) = \varphi_\alpha\left(U(\lambda z) * w'_\ell\right) = \max\left(\lambda (U(z) * w'_\ell), 0\right) + \alpha \min\left(\lambda (U(z) * w'_\ell), 0\right) = \lambda \max\left(U(z) * w'_\ell, 0\right) + \lambda \alpha \min\left(U(z) * w'_\ell, 0\right) = \lambda \varphi_\alpha\left(U(z) * w'_\ell\right) = \lambda D_1(z). \qquad (4)$$
This reasoning can be applied successively to each layer up to the output y. When the code z is one dimensional, the decoder can be summarized as two linear functions, one for positive codes and a second one for the negative codes. However, in all our experiments, the autoencoder without bias has chosen to use only one possible sign for the code, resulting in a linear decoder.
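The following minimal sketch (our illustration, not the paper's code) checks this homogeneity numerically for a small bias-free decoder built from zero-padding upsampling, convolution and leaky ReLU:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(1, 1, 3, padding=1, bias=False)  # no bias, as in Proposition 1
act = nn.LeakyReLU(0.2)

def decoder(z):
    # two bias-free layers of U (zero-padding upsampling) + conv + leaky ReLU
    x = z.view(1, 1, 1, 1)
    for _ in range(2):
        b, c, h, w = x.shape
        up = x.new_zeros(b, c, 2 * h, 2 * w)
        up[:, :, ::2, ::2] = x
        x = act(conv(up))
    return x

z, lam = torch.rand(1), 3.7
assert torch.allclose(decoder(lam * z), lam * decoder(z), atol=1e-6)  # D(lz) = lD(z)
```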
Furthermore, the profiles in Figure 3 suggest that a single function is learned, and that this function is multiplied by a factor which is constant for each radius. In light of Proposition 1, this means that the decoder has chosen a fixed sign for the code and that the decoder is linear. This can be expressed as
$$y(t, r) = h(r) f(t), \qquad (5)$$
where t is a spatial variable and r ∈ (0, m/2] is the radius of the disk. This is checked experimentally in Figure 7 in Appendix A. In this case, we can write the optimisation problem of the decoder as
$$\hat{f}, \hat{h} = \operatorname*{argmin}_{f,h} \int_0^R \int_\Omega \left(h(r) f(t) - \mathbf{1}_{B_r}(t)\right)^2 dt \, dr, \qquad (6)$$
where R is the maximum radius observed in the training set, Ω = [0, m−1] × [0, m−1] is the image domain, and $B_r$ is the disk of radius r. Note that we have expressed the minimisation problem for continuous functions f. This is not strictly the case, especially for images of small disk radii, however for our purposes the approximation is good. In this case, we have the following proposition.
Proposition 2 (Decoding Energy for an Autoencoder without Biases). The decoding training problem of the autoencoder without biases has an optimal solution $\hat{f}$ that is radially symmetric and maximises the following energy:
$$\int_0^R \left( \int_0^r f(\rho)\, \mathbf{1}_{[0,r]}(\rho)\, \rho \, d\rho \right)^2 dr =: E(f), \qquad (7)$$
under the (arbitrary) normalisation $\|f\|_2^2 = 1$.
Proof. When f is fixed, the optimal h for Equation (6) is given by
$$\hat{h}(r) = \frac{\langle f, \mathbf{1}_{B_r} \rangle}{\|f\|_2^2}, \qquad (8)$$
where $\langle f, \mathbf{1}_{B_r} \rangle = \int_\Omega f(t)\, \mathbf{1}_{B_r}(t)\, dt$. After replacing this in Equation (6), we find that
$$\hat{f} = \operatorname*{argmin}_f \int_0^R -\frac{\langle f, \mathbf{1}_{B_r} \rangle^2}{\|f\|_2^2}\, dr = \operatorname*{argmin}_f \int_0^R -\langle f, \mathbf{1}_{B_r} \rangle^2 \, dr, \qquad (9)$$
where we have chosen the arbitrary normalisation $\|f\|_2^2 = 1$. The form of the last equation shows that the optimal solution is obviously radially symmetric¹. Therefore, after a change of variables, the energy maximised by the decoder can be written as
$$\int_0^R \left( \int_0^r f(\rho)\, \mathbf{1}_{[0,r]}(\rho)\, \rho\, d\rho \right)^2 dr =: E(f), \qquad (10)$$
such that $\|f\|_2^2 = 1$. In Appendix A, we compare the numerical solution of this problem with the actual profile learned by the network, yielding a very close match. This result is very interesting, since it shows that the training process has achieved the optimal solution, in spite of the fact that the loss is non-convex.
3.2.3 GENERALISATION AND REGULARISATION
As we have recalled in Section 2, many works have recently investigated the generative capacity of autoencoders or GANs. Nevertheless, it is not clear that these architectures truly invent or generalize some visual content. A simpler question is : to what extent is the network able to generalise a simple geometric notion ? In this section, we address this issue in our restricted but interpretable case.
¹If not, then consider its mean on every circle, which decreases the L2 norm of f while maintaining the scalar product with any disk. We can then increase the energy back by dividing by this smaller L2 norm, according to ‖f‖2 = 1.
For this, we study the behaviour of our autoencoder when examples are removed from the training dataset. In Figure 4, we show the autoencoder result when the disks with radii above a certain threshold R are removed. The radii of the left three images (with a green border) are present in the training database, whereas the radii of the right three (red border) have not been observed. It is clear that the network lacks the capacity to extrapolate further than this radius. Indeed, the autoencoder seems to project these disks onto smaller, observed, disks, rather than learning the abstraction of a disk.
Again, by removing the biases from the network, we may explain why the autoencoder fails to extrapolate when a maximum radius R is imposed. In Appendix B, we show experimental evidence that in this situation, the autoencoder learns a function f whose support is restricted by the value of R, leading to the autoencoder's failure. However, a fair criticism of the previous experiment is simply that the network (and deep learning in general) is not designed to work on data which lie outside of the domain observed in the training data set. Nevertheless, it is reasonable to expect the network to be robust to such "holes" inside the domain. Therefore, we have also analysed the behaviour of the autoencoder when we removed training datapoints whose disks' radii lie within a certain range, between 11 and 18 pixels (out of a total of 32). We then attempt to reconstruct these points in the test data. Figure 5 shows the results of this experiment. Once again, in the unknown regions the network is unable to recreate the input disks. Goodfellow et al. (2016) (page 521) and Bengio & Monperrus (2005) propose several explanations of this phenomenon in the deep learning literature, such as a high curvature of the underlying data manifold, noisy data or high intrinsic dimensionality of the data. In our setting, none of these explanations is sufficient. Thus we conclude that, even in the simple setting of disks, the "classic" autoencoder cannot generalise correctly when a database contains holes.
This behavior is potentially problematic for applications which deal with more complex natural images, lying on a high-dimensional manifold, as these are likely to contain such holes. We have therefore carried out the same experiments using the state-of-the-art "iGAN" approach of Zhu et al. (2016), which is in turn based on the work of Radford et al. (2015), "DCGAN". The visual results of their algorithm are displayed in Appendix C. We trained their network using both a code size of d = 100 (as proposed by the authors), and d = 1 in order to ensure fair comparisons. Indeed, in our case, not only should the dimension of the latent space be d = 1, but the amount of training data is also not enough to work with d = 100. Although the d = 1 case leads to improved results, in
both cases the network fails to correctly autoencode the disks belonging to the unobserved region. This shows that the generalisation problem is likely to be ubiquitous, and indeed observed in more sophisticated networks, designed to learn natural image manifolds, even in the simple case of disks. We therefore believe that this issue deserves careful attention. Actually, this experiment suggests that the capacity to generate new and simple geometrical shapes could be taken as a minimal requirement for a given architecture.
In order to address the problem, we now investigate several regularisation techniques whose goal is to aid the generalisation capacity of neural networks.
3.2.4 REGULARISATION
We would like to impose some structure on the latent space in order to interpolate correctly in the case of missing datapoints. This is often achieved via some sort of regularisation. This regularisation can come in many forms, such as imposing a certain distribution in the latent space, as in variational autoencoders (Kingma & Welling (2014)), or by encouraging z to be sparse, as in sparse auto-encoders (Ranzato et al. (2007); Makhzani & Frey (2013)). In the present case, the former is not particularly useful, since a probabilistic approach will not encourage the latent space to correctly interpolate. The latter regularisation does not apply, since we already have d = 1. Another commonly used approach is to impose an `2 penalisation of the weights of the filters in the network. The idea behind this bears some similarity to sparse regularisation; we wish for the latent space to be as “simple” as possible, and therefore hope to avoid over-fitting.
We have implemented several regularisation techniques on our network. Firstly, we attempt a simple regularisation of the latent space by requiring a "locality-preservation" property as suggested in Hadsell et al. (2006); Alain & Bengio (2014); Liao et al. (2017), namely that the ℓ2 distance between two images (x, x′) be maintained in the latent space. This is done by randomly selecting a neighbour of each element in the training batch. Secondly, we regularise the weights of the encoder and/or the
decoder. Thus, our training attempts to minimise the sum of the data term, $\|x - D(E(x))\|_2^2$, and a regularisation term λψ(x, θ), which can take one of the following forms (a code sketch of these penalties is given after the list):
• Type 1 : $\psi(x, x') = \left(\|x - x'\|_2^2 - \|E(x) - E(x')\|_2^2\right)^2$;
• Type 2 : $\psi(\Theta_E, \Theta_D) = \sum_{\ell=1}^L \|w_{\cdot,\ell}\|_2^2 + \|w'_{\cdot,\ell}\|_2^2$;
• Type 3 : $\psi(\Theta_E) = \sum_{\ell=1}^L \|w_{\cdot,\ell}\|_2^2$.
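A minimal sketch of the three penalties in PyTorch, assuming an encoder E, a decoder D, and a randomly selected neighbour x2 for each batch element (names and shapes are our assumptions, not the paper's code):

```python
import torch

def type1(E, x, x2):
    # locality preservation: match l2 distances in image and latent space
    d_img = (x - x2).flatten(1).pow(2).sum(1)
    d_code = (E(x) - E(x2)).flatten(1).pow(2).sum(1)
    return ((d_img - d_code) ** 2).mean()

def l2_weights(net):
    # sum of squared convolution weights, excluding biases
    return sum(p.pow(2).sum() for n, p in net.named_parameters() if 'weight' in n)

def type2(E, D):
    return l2_weights(E) + l2_weights(D)  # regularise encoder and decoder

def type3(E):
    return l2_weights(E)  # encoder only: the scheme found to work best here

# total loss for a batch x (lambda_ is the regularisation weight):
# loss = ((x - D(E(x))) ** 2).sum() + lambda_ * type3(E)
```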
Figure 6 shows the results of these experiments. First of all, we observe that the type 1 regularisation does not work satisfactorily. One interpretation of this is that the manifold in the training data is "discontinuous", and therefore there are no close neighbours for the disks on the edge of the unobserved region. Therefore, this regularisation is to be avoided in cases where there are significant holes in the sampling of the data manifold. The second type of regularisation, minimising the ℓ2 norm of the encoder and decoder weights, produces an interesting effect. Indeed, while the manifold seems reasonable, upon closer inspection, the code z increases in amplitude during the training. Thus, the network cannot converge to a stable solution, which worsens the quality of the results. Finally, we observe that regularising the weights of the encoder works particularly well, and that the resulting manifold is continuous and correctly represents the area of the disks. Consequently, this asymmetrical regularisation approach is to be encouraged in other applications of autoencoders.
At this point, we take the opportunity to note that the clear, marked effects seen with the different regularisation approaches are consistently observed in different training runs. This is due in large part to the controlled, simple setting of autoencoding with disks. Indeed, many other more sophisticated networks, especially GANs, are known to be very difficult to train (Salimans et al. (2016)), leading to unstable results or poor reproducibility. We believe that our approach can be of use to more high-level applications, by making it easier to clearly identify which components and regularisation schemes best help in processing complex input data.
3.3 CONCLUSION AND FUTURE WORK
We have investigated in detail the specific mechanisms which allow autoencoders to encode image information in an optimal manner in the specific case of disks. We have shown that, in this case, the encoder functions by integrating over the disk, and so the code z represents the area of the disk. In the case where the autoencoder is trained with no bias, the decoder learns a single function which is multiplied by a scalar depending on the input. We have shown that this function corresponds to the optimal function. The bias is then used to induce a thresholding process that ensures the disk is correctly decoded. We have also illustrated certain limitations of the autoencoder with respect to generalisation when datapoints are missing in the training set. This is especially problematic for higher-level applications, whose data have higher intrinsic dimensionality and are therefore more likely to include such "holes". Finally, we identify a regularisation approach which is able to overcome this problem particularly well. This regularisation is asymmetrical, as it consists of regularising the encoder while leaving more freedom to the decoder.
An important future goal is to extend the theoretical analyses obtained to increasingly complex visual objects, in order to understand whether the same mechanisms remain in place. We have experimented with other simple geometric objects such as squares and ellipses, with similar results for an optimal code size. Another question is how the decoder functions with the biases included. This requires a careful study of the different non-linearity activations as the radius increases. Finally, the ultimate goal of these studies is to determine the capacity of autoencoders to encode and generate images representing more complex objects or scenes. As we have seen, the proposed framework can help identify some limitations of complex networks such as the one from Zhu et al. (2016), and future works should investigate whether this framework can help develop the right regularisation scheme or architecture.
A DECODING OF A DISK
During the training of the autoencoder for the case of disks (with no bias in the autoencoder), the objective of the decoder is to convert a scalar into the image of a disk with the ℓ2 distance as a metric. Given the profiles of the output of the autoencoder, we have made the hypothesis that the decoder approximates a disk of radius r with a function y(t; r) = h(r)f(t), where f is a continuous function. We show that this is true experimentally in Figure 7 by determining f experimentally by taking the average of all output profiles, and showing the pointwise division of f by randomly selected output profiles. We see that h is approximately constant for varying t and fixed r. Please note that we have removed the last spatial coordinate of the profile, which suffers from border effects.
We now compare the numerical optimisation of the energy in Equation (7) using a gradient descent approach with the profile obtained by the autoencoder without biases. The resulting comparison can be seen in Figure 8. One can also derive a closed form solution of Equation (7) by means of the Euler-Lagrange equation and see that the optimal f for Equation (7) is the solution of the differential equation y′′ = −kty with initial state (y, y′) = (1, 0), where k is a free positive constant that accommodates for the position of the first zero of y. This gives a closed form for f in terms of Airy functions.
B AUTOENCODING DISKS WITH A DATABASE WITH A LIMITED OBSERVED RADIUS
In Figure 9, we see the grey-levels of the input/output of an autoencoder trained (without biases) on a restricted database, that is to say a database whose disks have a maximum radius R which is smaller than the image width. We have used R = 18 for these experiments. We see that the decoder learns a useful function f which only extends to this maximum radius. Beyond this radius, another function, corresponding to the other sign of codes (see Proposition 1), is used but has not been tuned.
C AUTOENCODING DISKS WITH THE IGAN ZHU ET AL. (2016)
In Figure 10, we show the autoencoding results of the IGAN network of Zhu et al. We trained their network with a code size of both z = 100 and z = 1. Although the IGAN works better in the latter case, in both experiments the network fails to correctly autoencode disks in the missing radius region which has not been observed in the training database. | 1. What is the main contribution of the paper regarding autoencoders?
2. What are the limitations of the proposed approach, particularly in terms of architecture and regularization choices?
3. How does the reviewer assess the novelty and significance of the paper's findings compared to prior works?
4. Are there any questions regarding the task proposed in the paper, such as its representativeness or the choice of activation functions?
5. Do you think that the proposed regularization method is effective enough, and how does it compare to other regularization schemes? | Review | Review
This paper proposes a simple task (learning the manifold of all the images of disks) to study some properties of Autoencoders. They show that Autoencoders don't generalize to disks of radii not in the training set and propose several regularizations to improve generalisation.
The task proposed in the paper is interesting but the study made is somewhat limited:
- They only studied one choice of Autoencoder architecture, and the results shown depend heavily on the choice of the activation; in particular, sigmoid should not suffer from the same problem.
- It would be interesting to study the generalization in terms of the size of the gap.
- The regularization proposed is quite simple and already known, and other regularizations have been proposed (e.g. dropout, ...). A more detailed comparison with all previous regularization schemes would be much needed.
- The choice of regularization at the end seems quite arbitrary, it works better on this example but it's not clear at all why, and if this choice would work for other tasks.
Also Denoising Autoencoders (Vincent et al.) should probably be mentioned in the previous work section, as they propose a solution to the regularization of Autoencoders.
Overall nothing really new was discovered or proposed, the lack of generalization of those kind of architecture is a well known problem and the regularization proposed was already known. |
ICLR | Title
Learning to Infer Run-Time Invariants from Source code
Abstract
Source code is notably different from natural language in that it is meant to be executed. Experienced developers infer complex “invariants" about run-time state while reading code, which helps them to constrain and predict program behavior. Knowing these invariants can be helpful; yet developers rarely encode these explicitly, so machine-learning methods don’t have much aligned data to learn from. We propose an approach that adapts cues within existing if-statements regarding explicit run-time expectations to generate aligned datasets of code and implicit invariants. We also propose a contrastive loss to inhibit generation of illogical invariants. Our model learns to infer a wide vocabulary of invariants for arbitrary code, which can be used to detect and repair real bugs. This is entirely complementary to established approaches, which either use logical engines that scale poorly, or run-time traces that are expensive to obtain; when present, that data can complement our tool, as we demonstrate in conjunction with Daikon, an existing tool. Our results show that neural models can derive useful representations of run-time behavior directly from source code.
1 INTRODUCTION
Software maintenance requires reading a lot of code. Experienced developers are adept at this, garnering rich semantics just from this “static” (viz, without running the code) inspection to find complex bugs, predict a function’s outputs from its inputs, and learn new coding patterns. They strongly rely on generic assumptions about the program’s run-time behavior; e.g., that a list index never escapes the list bounds and strictly increases. Such “invariants” capture general, yet relevant constraints on the program’s expected run-time behavior.
Automatically inferring invariants can help both developers and tools: first, they can be used to detect bugs where explicit assumptions are incorrect or implicit ones ought to be explicit; second, invariants can guide myriad other tools, such as test-case generators (Artzi et al., 2006). However, inferring invariants is not tractable in general and sound approximations don’t scale beyond very small programs. Instead, popular tools either use dynamic trace data from real executions (esp. Daikon (Ernst et al., 2007)), which requires costly instrumentation, or focuses on highly constrained cases such as loops (Sharma et al., 2013a; Padhi et al., 2016).
Yet this scalability obstacle may be largely artificial. Practical programs rarely take on an exponential range of values (e.g., integers tend to come in a bounded range), and developers seem able to make such inferences without undertaking a project-scale analysis. Rather, they reliably extract them from a local context, using their past experience and cues from the code itself. Consider the snippet in Figure 1: the program on the right uses a time variable, returned from one method and passed to another. Not only is ‘time’ generally non-negative, in this particular case we should not update a position (using moments dx, dy) if no time has passed either. This inference, and many more, can quickly be made from reading just these lines of code. Other times, such implicit inferences should be made explicit: this snippet was later repaired by adding the guard on the left.
Based on this observed symmetry between explicitly guarded code and implicit run-time assumptions about code, we propose a model that learns invariants directly from static code. As developers rarely “assert” invariants in their code, we train this model using a proxy, by automatically converting explicitly guarded code to its implicitly guarded counterpart across millions of functions. The generated programs are constrained to be similar to real functions and used to train a large model with a new loss function that is aware of logical constraints.
Our model, BODYGUARD predicts a rich vocabulary of conditions about arbitrary code from new projects, and can be used to find & fix real missing-guard bugs, such as the one in Figure 1, with over 69% (repair) precision at 10% inspection cost. It also predicts more than two-thirds of Daikon’s invariants that could previously only be inferred with run-time data, and some entirely new ones that can be validated automatically with trace data. Our work presents a significant next step in learned static analysis, being the first to reliably produce natural invariants from arbitrary code alone. More broadly, we show that learned models can implicitly represent behavioral semantics, just from code.
2 OVERVIEW
Inferring invariants for arbitrary programs is NP-hard. Sound approaches using theorem provers are therefore constrained to restricted settings, such as simple loops (Sharma et al., 2013a), or ones with known inputs (Pham et al., 2017). Such approaches generally don't scale: needing SMT solvers limits tools to the few program points where invariants can be proven, and ground-truth inputs typically need to be constructed by hand. An alternative is to use execution traces (Ernst et al., 2007): when realistic workloads are available (e.g. from test suites), they generally span entire systems. However, genuinely representative workloads are rare, so trace-based tools often generate poor invariants (Kim & Petersen). A key concern is that none of these have a notion of relevance, or naturalness, of the actual statements (Hellendoorn et al., 2019a).
To address these gaps, we propose a learned invariant generator that predicts directly from code, trained with realistic examples. Our central claim is that the natural distribution of programs includes many groups of similar functions, some of which assert run-time assumptions explicitly, and with much detail, while others vary along these dimensions. As Figure 1 highlights, it is common for code not to state salient conditions (time > 0, on the right) that developers may naturally intuit, while other times (e.g. in a later revision, on the left), such conditions are explicitly checked. If this distributional assumption holds in general, then we can use explicit conditional checks that guard blocks in functions to teach our models about the implicit invariants of unguarded blocks in similar functions. Furthermore, we conjecture that in such comparable samples, the condition is both salient (since it is checked explicitly) and natural (since it is written by humans). Learning from such examples is thus a very appropriate training signal for inferring practically useful invariants.
Figure 2 illustrates our data generation: we find explicitly guarded blocks in functions that can be removed without substantially perverting the program, and convert these checked cases to implicit ones (Section 3.1). We garner a large aligned dataset to learn to predict the reverse of this mapping, training a Transformer-style model for code, augmented with a loss that encourages sampling logical conditions (Section 3.2). This model, nick-named BODYGUARD, works on any (Java) function, quickly adapting to the local vocabulary and semantics, and has a natural inclination to generate realistic, salient invariants that are often valid (Section 4). This result fits in a long line of observations that programming is remarkably predictable, including in its syntax (Hindle et al., 2012) and execution values (Tsimpourlas et al., 2020), likely by developers’ design, to control the complexity of the task (Casalnuovo et al., 2019). Yet none of these relate code and its execution directly, as we do through translating the former into general, intuitively meaningful statements about the latter.
3 APPROACH
Training and evaluating this approach required a substantial experimental setup: we collect three datasets for three types of evaluations and introduce an improved loss function. This section describes the data collection, evaluation, and modeling setup generally; Appendices A.1 and A.2 provide additional details on our datasets and modeling architecture, respectively. Our benchmark datasets, code, and models are available at http://omitted.link.
3.1 DATASETS
To train BODYGUARD, we generate ca. 2.5 million aligned invariant/function samples from methods with if-statements. We extract these from top-starred Java projects from Github, which we split at the organization level into training (920 projects), held-out (19 projects), and test data (61 projects). Each file was parsed to extract all its methods, from which we generate one sample for each (side-effect free) if- (or if-else-)statement by removing said guard and storing its condition. This produces an equivalent code fragment in which the statement's condition is presumed to either be always true (if its body is kept) or false (otherwise). Correspondingly, the omitted condition (or its negation) becomes an invariant on the remaining code. The resultant sample contains the entire method (minus conditional check) as context, with the range of tokens where the invariant condition applies indicated.
We train our model to generate run-time conditions for any indicated segment of code in Java functions. We evaluate its ability to do so in two settings: 1. identifying and repairing missing explicit if-guards, collected from real bug reports, and 2. measuring the validity of our predicted invariants using trace data, collected with Daikon (Ernst et al., 2007). For the first, we collect a dataset of real missing if-condition bugs from across the history of 10K Java projects by parsing all the revisions in these projects’ histories and selecting for changes that a) introduce a single if-statement to guard previously un-guarded code, and b) are described as a bug-fixing change (see Appendix A.1.3 for details). We find ca. three thousand of these. For the second evaluation, we use Daikon to collect execution trace data from a smaller set of eight projects that we manually instrumented. We then compare our predictions to both those generated by Daikon, to measure overlap, and to the collected traces directly, to assess the validity of the invariants that we uniquely generate. This helps us understand the inference gap between static and dynamic information; i.e., is run-time data (when present) strictly more useful than code, or are the two information sources orthogonal?
3.2 MODEL SETUP
Discovering invariants is non-trivial even for experienced developers, so we both equip our models with substantial capacity and training time, and design to prioritize precision over recall. Figure 3 shows an overview of the architecture, inputs and outputs of our model.
3.2.1 ARCHITECTURE
We base our architecture on the Transformer (Vaswani et al., 2017), amplified with the relation attention mechanism from Hellendoorn et al. (2020). While standard (lexical) language models are quite useful for code, Allamanis et al. (2018) and others have shown that utilizing syntactic & semantic information such as the AST, or control/data-flow relations, outperforms text-only models. Hellendoorn et al. (2020) propose a Transformer-based architecture that handles such relations but is faster to train and more powerful than graph neural networks (Allamanis et al., 2018). Their model relies on an added attention bias $b_{r_{ij}}$, injected into the query-key comparison of the Transformer's conventional scaled dot-product attention: $e_{ij} = (q_i + b_{r_{ij}})\, k_j^\top / \sqrt{N}$. This bias is sensitive to known relations r between tokens i and j (if any, and summed together if more than one), allowing the model to selectively sharpen (or dampen) the significance of each relation. We adopt this model for our work, specifically with 512-dimensional hidden states, 64-dimensional relational embeddings, 8 attention heads, and 8 layers, totaling ca. 67M parameters.
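As a rough sketch (our reading of the mechanism, not the released implementation), the biased attention score can be computed as follows; the tensor shapes and the way multiple relations are summed are assumptions on our part:

```python
import torch

def relational_scores(q, k, rel, rel_emb):
    # q, k: [batch, len, N] per-head queries and keys
    # rel: [batch, len, len, R] binary flags for R relation types between tokens
    # rel_emb: [R, N] learned relation embeddings, summed when several hold
    N = q.size(-1)
    b = torch.einsum('bijr,rn->bijn', rel.float(), rel_emb)        # b_{r_ij}
    scores = torch.einsum('bijn,bjn->bij', q.unsqueeze(2) + b, k)  # (q_i + b) k_j^T
    return scores / N ** 0.5
```

A softmax over the last dimension then yields the attention weights as usual.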
Our model uses relational information in the form of program graphs. A program graph extractor has been released for C# code (Allamanis et al., 2018), but not yet for Java, so we created our own. Specifically, we extract 5 commonly used edge types, all bi-directional, reflecting common lexical, syntactic, and semantic relations in programs (detailed in Appendix A.2.1). We use the same “leaves-only” representation as Hellendoorn et al. (2020) to limit the size of our inputs by not including non-terminal AST nodes, but instead rerouting edges that connect such nodes to representative syntax token (e.g. from an if-statement node to its “if” token in the code). Finally, to ensure that our decoder is aware of the specified range of code tokens where the invariant applies, we also leverage the relational mechanism between the decoder and encoder, using a simple unary relation (i.e., that a token is part of the invariant’s range) between the generated tokens and input tokens.
3.2.2 DECODING LOGICAL STATEMENTS
We synthesize training data using a proxy for invariants, which necessarily introduces some bias towards characteristics of if-conditions (and the code they guard) that is incompatible with true invariants. Most notably, in code, small syntactic differences lead to drastic changes in run-time
behavior. It is common for if-else statements to have quite similar bodies, for which we generate two samples: one with the if-condition as an invariant for the if block, and one with its logical negation for the else block. This approach tends to produce very similar code fragments with very similar, but logically opposite (e.g. ‘!= null’ vs. ‘== null’) conditions.
We supervise our model to encourage its representations for syntactically close but semantically opposite statements to be distinct by introducing a contrastive hinge loss term. For every training sample, we produce the logical negation of the invariant and require the decoder to produce that negation with a much higher entropy than the original. Concretely, given a statement inv composed of tokens $t_i$ and a negating function neg, we use the regular cross-entropy loss $\mathcal{L}_{CE}$:
$$\mathcal{L}_{CE}(inv) = -\sum_{i=1}^{|inv|} \log \operatorname{prob}(t_i \mid t_1 \cdots t_{i-1}, \text{context})$$
to compute the entropy distance w.r.t. its negation:
$$\Delta_{inv} = \mathcal{L}_{CE}(neg(inv)) - \mathcal{L}_{CE}(inv)$$
$$\mathcal{L}_{hinge}(inv) = \max\left(0,\ \varepsilon - \Delta_{inv}\right)^2$$
in which $\varepsilon$ is the minimum desired entropy "distance" in bits. In this work, we set $\varepsilon = 2$. For this hinge-loss model, as we will call it in the rest of this paper, we train with a loss equal to $\mathcal{L}_{seq} + \mathcal{L}_{hinge}$.
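A minimal sketch of this objective (our illustration; the model interface, the negated-token inputs, and the nats-vs-bits detail are assumptions, not the paper's code):

```python
import torch
import torch.nn.functional as F

def seq_xent(logits, target_ids):
    # average per-token cross-entropy of a statement under the decoder
    return F.cross_entropy(logits.transpose(1, 2), target_ids, reduction='mean')

def total_loss(model, context, inv_ids, neg_inv_ids, eps=2.0):
    l_inv = seq_xent(model(context, inv_ids), inv_ids)          # L_CE(inv)
    l_neg = seq_xent(model(context, neg_inv_ids), neg_inv_ids)  # L_CE(neg(inv))
    delta = l_neg - l_inv                                       # entropy gap
    hinge = torch.clamp(eps - delta, min=0.0) ** 2              # penalise small gaps
    return l_inv + hinge
```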
4 ANALYSIS
We first assess our model’s precision/recall behavior on our automatically collected corpus; then, we apply it to a promising down-stream task: missing if-guard repair (and detection), which further helps us assess the models’ sensitivity to salient invariants. Finally, we use trace data to get a measure of our invariants’ validity and contrast it with an execution-based tool.
4.1 CORPUS DATA
We sample our two models' held-out performance every 100,000 samples while training,¹ leading to the learning curves shown in Figure 4a. The base model saturates earlier than the one employing a contrastive hinge loss, as the latter faces the more challenging task of distinguishing between very similar statements. However, after ca. one week of training, both models converge to approximately the same quality. It speaks to the challenge of the task that the models only reach ∼30% accuracy, due in part to the enormously diverse vocabulary of statements that occurs across our corpus, and to the inherent ambiguity of generating a single invariant when multiple valid options are available (as we will study later).
¹A full epoch is approximately 2.3M samples for the base model and twice that for the hinge-loss models.
We evaluate each model at the step with their highest held-out accuracy on the test data, where we compare the top generated invariant (from beam search, size = 25) to the ground truth. Figure 4b shows the precision/recall behavior of the two models in the high precision range, which is generally much more useful to developers than high recall. We rank predictions by their entropy: an invariant that is highly likely to be sampled from its context is likely correct. Both models respond strongly to this entropy threshold, becoming especially far more precise when entropy values drop below 1.0 (around 40% recall), and converging to (near) perfect precision, at a commensurate expense of recall. Both break 80% precision at nearly 20% recall, which still accounts for tens of thousands of program points across our test projects alone. Going forward, we use the hinge loss model, which has the better precision-recall trade-off, and prioritize precision over recall.
4.2 MISSING IF DETECTION
Using the ∼3K real missing if-guard bugs collected from project histories (see Section 3.1), we first measure our model's accuracy and precision at predicting this guard from the localized bug in the top row of Table 1. This is the task most directly related to its training signal, where we provided our model with the location of the code guarded by the targeted invariant. Our model achieves a similar overall accuracy here (ca. 29.3%) as on our general test data,² and precision at 10% recall is also quite high (69.1%), allowing us to fix 215 out of 311 bugs at that level once located. That these tasks appear to be comparably "hard" is relevant; automatically synthesized training data is often overly easy compared to real tasks, which harms generalization (Hellendoorn et al., 2019b).
We also care about our model's sensitivity to salience: the missing condition in these samples is (arguably) the most important invariant in the entire method, not just the indicated code block. Our model should be able to detect this given how it was trained. This contrasts with tools like Daikon (Ernst et al., 2007), which emit all logically valid invariants, many of which are irrelevant (Hellendoorn et al., 2019a). The next three rows of Table 1 show the results of running our invariant generator on every contiguous segment (up to 5 blocks) of code in each buggy method, ranking the top invariants across segments for inspection. This is substantially harder than the previous task, reducing the overall accuracy threefold and roughly halving precision. Nevertheless, that is still much better than might be expected if BODYGUARD had no location-sensitivity: we test over 30 blocks per method on average. We also show that the top prediction often matches some aspect of the correct answer, especially the position, often predicting the correct invariant at another (nearby) block of code.
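For clarity, the segment-ranking procedure can be sketched as follows (a hypothetical interface; `model.generate` and its arguments are our invention):

```python
def rank_invariants(model, blocks, max_span=5, beam=25):
    # score every contiguous segment of up to `max_span` blocks in a method
    candidates = []
    for i in range(len(blocks)):
        for j in range(i + 1, min(i + max_span, len(blocks)) + 1):
            for inv, entropy in model.generate(blocks, span=(i, j), beam=beam):
                candidates.append((entropy, (i, j), inv))
    # lowest-entropy invariants are ranked first for inspection
    return sorted(candidates, key=lambda c: c[0])
```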
Finally, we note that the other (low entropy) invariants predicted here are often not at all “incorrect”; from cursory inspection, many are valid, meaningful statements. We study their validity next.
4.3 VALIDITY AND OVERLAP WITH DAIKON
Learning invariants just from code stands in sharp contrast to most current approaches in this field, prominently including Daikon (Ernst et al., 2007), which learns invariants from execution trace data instead. Collecting trace data requires instrumenting projects and access to diverse, representative workloads. This makes it much harder to apply to arbitrary code than our approach but has the benefit of offering stronger guarantees of correctness. Comparing our model with Daikon in projects where this information is available thus allows for two useful evaluations. First, we can lower-bound
2The base model (trained without hinge loss) reached 26.8% accuracy.
our tool’s true-positive rate by determining how often it replicates Daikon’s own invariants, which we tentatively deem “safe” because they hold on all observed traces and have passed a significance test.3 Second, we can use this trace data directly to determine the validity of (a subset of, see Appendix A.3.2) our invariants that do not overlap with Daikon’s.
Figure 5a shows the first result: the frequency with which our invariants overlap with Daikon's, again plotted against recall, where the points correspond to entropy thresholds ranging from 1e-4 to 10. Evidently, pre-conditions are easier to predict for our model, likely because it has no real notion of post-conditions (see Appendix A.3.2). Even so, our tool can retrieve more than two-thirds of Daikon's invariants at a respectable 10% recall from static code alone, which is quite promising.
We generate 10 invariants per program point using beam search, so even at a low entropy threshold we produce many pre- and post-conditions that Daikon does not (those either out of its vocabulary, or with too few observations). It is reasonable to expect many of these to be valid given previous results. Since Daikon does not provide a means of validating a plain-text invariant, we wrote a simple logical engine that parses Daikon's trace data files and compares a number of categories of our invariants against the recorded values, such as array length, string equivalence, instanceof checks, etc. Using this approach, we are able to validate ca. 40% (12K) of our emitted invariants, resulting in the validities summarized in Figure 5b. In short, our invariants at full recall are valid ca. 60% of the time, and this validity ratio greatly increases as we sharpen the entropy threshold, to over 80% at recall values under 10%.
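A toy version of such a check (our illustration, covering only one category of simple integer comparisons; the real engine handles more, e.g. array lengths and instanceof checks):

```python
import re

OPS = {'==': lambda a, b: a == b, '!=': lambda a, b: a != b,
       '<':  lambda a, b: a < b,  '<=': lambda a, b: a <= b,
       '>':  lambda a, b: a > b,  '>=': lambda a, b: a >= b}

def check_invariant(inv, observed):
    """Check e.g. 'time > 0' against {'time': [3, 7, 1]} from trace data."""
    m = re.fullmatch(r'(\w+)\s*(==|!=|<=|>=|<|>)\s*(-?\d+)', inv.strip())
    if m is None or m.group(1) not in observed:
        return None  # outside this toy checker's vocabulary
    name, op, rhs = m.group(1), m.group(2), int(m.group(3))
    return all(OPS[op](v, rhs) for v in observed[name])
```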
Many of these validated invariants were not produced by Daikon, implying that static and dynamic data are orthogonal for this task. We collected the 708 pre-conditions that BODYGUARD generates at an entropy of ≤0.1; of these, 540 could be checked automatically with trace data, yielding 449 valid and 91 invalid cases. We manually inspected the 168 remaining cases and found that most (122) were valid, but Daikon’s tracer simply did not record the information needed to predict these.4 Overall, this suggests that more than 80% of our invariants at this recall level (3.5%) are correct, and more than two-thirds of the invalid remainder could be ruled out using trace data, if available, leaving a false positive rate of just 6.5% (46/708) when execution data is available (while also adding about 200 valid invariants to Daikon’s own predictions). This supports our belief that our tool is largely orthogonal to, and usefully synergistic with, dynamic, trace-based invariant generators.
3Though in practice it generates a fair number of spurious statements still. 4Some of these were correct statements but not proper pre-conditions, e.g. invariants about a variable declared at the first line of the function. This is an artifact of our training setup, which has no explicit notion of method-level pre-conditions. We marked these as invalid for this analysis.
5 RELATED WORK
Automatically inferring invariants is usually approached either in constrained settings where some “checker” (e.g. an SMT solver) or ground-truth is available, or under the assumption that we have access to execution traces from realistic workloads. Among the first, Sharma et al. (2013b) find algebraic (polynomial) invariants by solving a system of linear equations with an SMT solver and using counterexamples to create new test inputs. Sharma et al. (2013a) use PAC-learning to learn integer loop invariants on programs with a single loop, trained by contrasting passing and failing test cases. Padhi et al. (2016) learn pre-conditions and loop invariants as boolean combinations of arithmetic conditions (“features"), which they synthesize by generating and testing all features up to a size cutoff. This approach is agnostic to the program structure, as is Pham et al. (2017), who use a fixed set of feature templates over state vectors to learn linear inequalities that classify passing and failing state vectors, requiring both post-conditions and passing and failing tests to be in place. In contrast, our work makes no assumptions about the code other than the availability of a parser. In settings where an SMT solver (or test cases) is available, it could be used to filter invalid invariants generated by BODYGUARD.
Among machine learning based approaches, Si et al. (2018) use policy-learning to teach a GNN to generate loop invariants in cooperation with an SMT solver (Z3), which provides intermediate rewards (through counterexamples) to finesse the sparsity of the eventual reward (the final validity of the invariant). A second reward is added to reject "meaningless" and "trivial" predicates such as e == e or e < e. Besides not requiring an SMT solver, our approach learns notions like "relevant" and "natural" directly from real code. Relatedly, Brockschmidt et al. (2017) also use GNNs to induce invariants over data structures, using a similar approach of generating invariants (in separation logic) supervised by data produced from test runs. The production is based on hand-engineered features over the data-structure graphs. Both these approaches may be symbiotic with ours where tests or logical constraints are known, although they consider different classes of invariants.
Daikon (Ernst et al., 2007) belongs to the second class of invariant predictors, leveraging execution traces from realistic inputs to infer a large vocabulary of method pre- and post-conditions. This general applicability has led to its frequent use as a basis for other tools, often to generate an initial corpus of invariants for tasks such as automated patching (Perkins et al., 2009) and test generation (Artzi et al., 2006; Pacheco & Ernst, 2005). However, truly representative inputs are rare, and using incomplete data risks generating many irrelevant or invalid invariants. Polikarpova et al. (2009) found that the size of the test suite affects the validity of generated invariants on Eiffel programs. Kim & Petersen anecdotally note various issues with Daikon's invariants on large C++ systems, such as a high degree of false positives and few insightful invariants. Hellendoorn et al. (2019a) similarly observe (on hand-annotated C# functions) few relevant and valid invariants based on executions from unit tests. Our approach learns directly from natural conditions to generate relevant and generalizable conditions, and when trace data is present, it can be used to filter out invalid invariants.
6 CONCLUSION
We conjectured that typically used invariants are in a sense natural, like many other aspects of programs (Hindle et al., 2012; Barr et al., 2013; Tsimpourlas et al., 2020), and therefore predictable, intentionally written in standardized ways for ease of reading and writing (Casalnuovo et al., 2019). Our results support this claim: both explicit (if-statements) and implicit (invariants) conditions pertaining to code can be predicted precisely, and with high validity, from code reading alone, facilitated by our proposed data generation approach and loss function. As a result, we can generate many invariants that were previously only accessible through trace data (and more), which greatly increases the reach and applicability of invariant inference.
This finding has broad implications: our tool can provide valuable semantic insights both to developers, e.g. to aid debugging efforts or facilitate code understanding, and to other tools, many of which struggle to navigate an exponentially large search space of programs. Our tool can help bias this search space using highly likely assertions, which could greatly improve the range and quality of solutions found by downstream applications. In summary, our novel approach learns to reason about program state by synthesizing training data from if-conditions; this empowers BODYGUARD to reliably generate useful invariants entirely from static code.
A APPENDIX
A.1 DATA COLLECTION DETAILS
We base our evaluation on a Java dataset consisting of the top 10,000 most-starred Java projects on Github, collected March 30th, 2020 using the Github v3 API. Since generating our training data samples is quite expensive, we used just the top 1,000 (most starred) of these projects to automatically generate training and evaluation samples for the results described in Section 4.1. This dataset was split between training, held-out and evaluation sets at the organization level to ensure minimal duplication, as projects within the same organization often share many coding patterns (Allamanis, 2019). We allocated 95% of organizations (920 projects) to training data, 2% to held-out data (19 projects), and 3% to test data (61 projects), to assess the final trained models.
A.1.1 INVARIANT GENERATION
We parse each file using Eclipse’s JDT parser and extract all (non-nested) methods from the resulting parse tree. Within each method, we detect all if-statements, removing all those whose conditions contain side-effects (such as assignments, increment/decrement operators, and non-whitelisted methods, see Appendix A.1.2), and those whose body contains a control-flow altering statement (e.g. return, throw) unless it is the sole statement.5 For the remainder, we generate samples based on the following types of if-statements:
Simple if-statements: these include samples like Figure 1, in which a single if-statement guards a simple body with no control-flow altering code.
If-else statements: for these we generate two samples: one in which we remove the else block entirely and generate an if-invariant as above, and one in which we negate the condition and generate an invariant for just the else block. Note that else if statements in Java are treated as nested statements and thus handled the same way.
Control-flow altering if-statements: any if-statement whose body prevents the execution of subsequent code, by containing just a return, break, continue, or throw (Exception) statement, is treated as declaring an invariant (namely, the negation of the if-condition) for the ensuing code.
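To make the third case concrete, here is a minimal illustration; the class and names are invented for this sketch, and a real sample additionally stores the full method as context along with the token range the condition applies to:

class GuardExample {
    // Original: a control-flow altering if-statement (the third case above).
    static String renderOriginal(String name) {
        if (name == null) { return ""; }
        return name.trim();
    }

    // Derived training sample: the guard is dropped, and its negation
    // becomes the target invariant for the code that follows.
    static String renderSample(String name) {
        // target invariant for the line below: name != null
        return name.trim();
    }
}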
In all cases, the surrounding context is the entire method, and the range of tokens to which the condition applies (namely, those that used to be guarded) is stored with the sample. We generate samples for all these conditions, producing a new sample for every if-statement. This ensures that each sample minimally alters the original code, which reduces the risk that we produce unnatural code (which would harm the generalization of our model). As such, a method can produce many samples, so functions with many conditions will be represented proportionally more often. We do not consider that problematic, as 1. long functions tend to have correspondingly more invariants, so the increased emphasis should be beneficial to our model, and 2. we anyways cap our training samples to only modestly large functions (up to 500 sub-tokens, which typically translates to the order of 20 lines), due to memory constraints.
A.1.2 PRODUCING NATURAL FUNCTIONS
Not all if-guards can be removed without changing the semantics of the code; conditions can have side-effects. This includes assignments (e.g. if ((x = y) != null)), certain operators (viz. ++ and --) and method calls with side-effects. To ensure that the converted code is semantically coherent, and because invariants should not have side effects anyways, we omit all such cases. Many method calls do not have side effects, so to avoid limiting our dataset too much, we heuristically select a large, but relatively “safe” set of these based on common coding patterns. This includes common “getter” methods, java.lang.Math calls, object equality tests, collection inspection methods, such as inclusion checks (e.g. ‘contains’, ‘has’) and size-related methods, and a few miscellaneous others that were common in our training data (e.g. parseInt, name). The regexes used to detect these various types of methods are listed in Table 2.
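As a rough sketch of this heuristic (the patterns below are illustrative stand-ins, not the exact regexes of Table 2):

import java.util.List;
import java.util.regex.Pattern;

class PurityHeuristic {
    // Method-name patterns that are likely side-effect free; the real,
    // complete list of regexes is given in Table 2.
    private static final List<Pattern> SAFE = List.of(
        Pattern.compile("^(get|is|has)[A-Z].*"),                     // getters/predicates
        Pattern.compile("^(contains|indexOf|size|length|isEmpty)$"), // collection inspection
        Pattern.compile("^(equals|compareTo|hashCode)$"),            // object equality
        Pattern.compile("^(abs|min|max|parseInt|name)$"));           // Math calls, misc.

    static boolean isLikelyPure(String methodName) {
        return SAFE.stream().anyMatch(p -> p.matcher(methodName).matches());
    }
}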
Removing if-statements does not always yield meaningful code, consider:

int foo(int x, int y) {
    if (x > y) { return x; }
    return y;
}
5When an if-statement body terminates the current branch of execution only after first executing some other code, generating equivalent unguarded code is complicated: inlining the guarded code (minus the final statement) would often produce very unnatural code, as it tends to involve some form of error-recovery, such as logging or resetting a value. Omitting the entire block instead, as we do for simple control-flow altering statements, may be more appropriate; future work can explore this and various other corner cases to generate more samples.
If we remove the conditional check, the resulting method is left with just two consecutive return statements, which is invalid in Java. This particular case would trigger a compiler error, but not all inappropriate removals do: if the if-body had instead assigned y = x + 1;, removing it would result in y always being assigned x + 1 before returning, making the parameter useless. Not using a parameter is not erroneous by definition, since the method foo may be inherited (or overridden in a subclass) and other instantiations do make use of it, so Eclipse’s parser just emits a warning. Since both these cases result in code that is unrepresentative of typical Java and would yield highly predictable invariants, we additionally reparse each resulting function after removal of the targeted if-statement and discard any changes that trigger compilation warnings and errors.
Specifically, Eclipse JDT requires full type resolution to guarantee correct program analysis and stops checking for violations if it finds compile-time errors from missing types. When processing as many projects as we do (many of which cannot be built automatically), we cannot soundly resolve all dependencies for each project. As a close approximation, we instead parse each function in its entire project context to allow as much heuristic type resolution as possible. Then, we look for any increase in warnings and errors between the method before and after removing an if-statement. This reduces the number of collected samples and increases the time to generate the dataset (to ca. 200 CPU hours for 1K projects), but also increases its validity by eliminating many inappropriate fragments.
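A minimal sketch of this filter follows. It assumes we hold the file’s source before and after removing the if-statement; unlike our real pipeline, which parses in full project context with heuristic binding resolution, this simplified version only counts the problems JDT reports on a standalone parse:

import org.eclipse.jdt.core.dom.AST;
import org.eclipse.jdt.core.dom.ASTParser;
import org.eclipse.jdt.core.dom.CompilationUnit;

class RewriteFilter {
    // Parse a source string and count reported problems (warnings + errors).
    static int problemCount(String source) {
        ASTParser parser = ASTParser.newParser(AST.JLS8);
        parser.setKind(ASTParser.K_COMPILATION_UNIT);
        parser.setSource(source.toCharArray());
        CompilationUnit unit = (CompilationUnit) parser.createAST(null);
        return unit.getProblems().length;
    }

    // Keep the rewrite only if removing the if-statement did not
    // introduce any new warnings or errors.
    static boolean keepRewrite(String before, String after) {
        return problemCount(after) <= problemCount(before);
    }
}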
Finally, we limit our functions to those having 500 (sub-)tokens or less to facilitate a reasonable modeling throughput. This does not reduce the dataset by much; most functions tend to fit this limit. In total, we collect ca. 2.34M training samples, 12.1K held-out samples and 101K test samples, with approximately 200 sub-tokens per function on average.
A.1.3 COLLECTING “MISSING IF” BUGS
We collect our dataset of missing if-condition bugs from across the history of all the aforementioned 10K projects in our dataset. For each project, we parsed every commit to the main branch, using git’s “diff” function to identify cases in which the sole addition was to wrap one or more existing statements in an if-statement. This yielded 32,471 samples from across 8,174,552 commits. Although all of these may constitute interesting samples, we prioritize bug-detection for now as the most direct application of our model. To ensure that our collected samples are likely bug-related, we focus only on the ca. 3.7K cases in which the entire commit introduced just a single if-statement in a single Java file and the corresponding commit message contained any of the common bug-related terms such as “fix”, “bug”, and “fault” (Ray et al., 2016). We additionally filtered out any commits to projects that were included in our training dataset to avoid the risk of overlap (which need not be present, as many commits reflect now-outdated code), yielding 3,146 samples in total.
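A condensed sketch of this final filter; the term list is an illustrative stand-in for the full set of bug-related terms, and the diff-derived counts are assumed to have been computed from git’s output beforehand:

import java.util.regex.Pattern;

class CommitFilter {
    private static final Pattern BUG_TERMS = Pattern.compile(
        "\\b(fix(es|ed)?|bug(s)?|fault(s)?|defect(s)?|error(s)?)\\b",
        Pattern.CASE_INSENSITIVE);

    // Keep a commit only if it touches one Java file, adds exactly one
    // if-statement, and its message mentions a bug-related term.
    static boolean isLikelyBugFix(String message, int changedJavaFiles, int addedIfStatements) {
        return changedJavaFiles == 1
            && addedIfStatements == 1
            && BUG_TERMS.matcher(message).find();
    }
}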
Table 3: per-project counts of instrumented methods and resulting Daikon invariants (only the header – Project, Methods, Invariants – survived extraction).
A.1.4 RUNNING DAIKON
Comparing our tool to Daikon (Ernst et al., 2007) required some adaptations. Daikon requires projects that are fully built, instrumentable, and have representative workloads. Unit tests are often insufficient because they test for both appropriate and inappropriate values (e.g. those triggering an exception), which is counter to our purpose.6 Scaling Daikon to our aforementioned dataset is not feasible; indeed, to the best of our knowledge there is no large public dataset of Daikon invariants on real programs. Instead, we created a modestly large dataset of our own.
To do so, we leveraged the Dacapo benchmark (Blackburn et al., 2006). Originally created to benchmark program optimizations (e.g. through better compilers), each project in this benchmark comes with a set of representative workloads designed to execute many of its paths. This is ideal for our case. Practically, although the benchmark comes with a single runner for each project, Daikon could not instrument through the reflective calls that this framework uses. Instead, we manually instrumented and ran 8 projects (details in Table 3) in this suite directly, which, in nearly all cases, involved writing our own “runner” to mimic Dacapo’s instrumentation while calling the requisite project-code directly. We then applied Daikon as usual, running the code under instrumentation first and then producing invariants from the resulting traces. Table 3 summarizes the resulting invariant counts.
We limited the volume of the collected trace data by exponentially decreasing the number of traces for each program point once it was seen sufficiently often (10 times) and excluding many values from tracing, such as those that are not visible from the program point of interest and any nested values with more than three levels. Even then, Daikon required upwards of 30GB of RAM and nearly an hour of processing for the larger projects – much more than our models.
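One plausible way to implement such throttling (our exact decay schedule may differ) is to record the first 10 observations of each program point and keep later ones with exponentially decaying probability:

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

class TraceThrottle {
    private final Map<String, Integer> seen = new HashMap<>();
    private final Random rng = new Random(42);

    // Record the first 10 observations of a program point; after that,
    // keep each new observation with probability 2^-(count - 10).
    boolean shouldRecord(String programPoint) {
        int count = seen.merge(programPoint, 1, Integer::sum);
        if (count <= 10) return true;
        return rng.nextDouble() < Math.pow(2.0, -(count - 10));
    }
}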
A.2 MODELING DETAILS
A.2.1 PROGRAM GRAPH EXTRACTION FOR JAVA
We used Eclipse’s JDT parser with approximate name-binding resolution to extract five edge types across three broad categories of information that are accessible in source code:
• Lexical: every token is connected to its neighbors through next-token edges (and their reverse). This adds additional sensitivity to lexically local information beyond the positional encoding used in the standard Transformer.
• Syntactic: we extract all AST parent-child relations, which provide insight into the hierarchical structure of source code.
• Data-flow: we include three types of data-flow edges: next-use edges, which connect lexically sequential uses of the same variable; computed-from edges, which connect any variable usage to the last value it was assigned; and def-use edges, which connect every variable usage to its (single) original declaration point.
In addition, every edge type has a symmetric, mirrored version (e.g. prev-token), yielding a total of 10 distinct edge kinds used by our model.
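Enumerated for reference (all mirror names other than prev-token are invented for this sketch):

enum EdgeKind {
    NEXT_TOKEN, PREV_TOKEN,    // lexical
    AST_CHILD, AST_PARENT,     // syntactic
    NEXT_USE, PREV_USE,        // data-flow: sequential variable uses
    COMPUTED_FROM, COMPUTES,   // data-flow: last assigned value
    DEF_USE, USE_DEF           // data-flow: declaration links
}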
6In addition, Daikon cannot instrument JUnit-tested code since it uses reflection, which effectively makes Java tests off-limits.
A.2.2 TRAINING DETAILS
Consistent with recent observations regarding effective modeling of source code vocabulary (Hellendoorn & Devanbu, 2017; Karampatsis et al., 2020), we use Byte-Pair Encoding to create a sub-token vocabulary based on the tokens in our training data. Our vocabulary, estimated from the training data, spans 10,000 sub-tokens; both the input function and the predicted invariant are sub-tokenized using this (reversible) dictionary. Transformer models generally scale in memory needs with the square of the size of their inputs. To ensure that our minibatches are sufficiently large to keep the gradients stable, we restrict our inputs to functions with up to 500 (BPE) tokens and our invariants to 50 tokens (although invariants that long are very rare). With these cut-offs, we train batches of up to 12,500 tokens in parallel across two NVidia RTX Titan GPUs with 24GB of VRAM each. By packing similarly sized functions per batch, we minimize the overhead from padding and are able to fit ca. 70 functions per batch on average.
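A simplified sketch of this packing strategy; our real batching code may differ in details, but the core idea is to sort functions by length and greedily fill a fixed token budget so that padding to the longest member stays cheap:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class BatchPacker {
    // Group tokenized functions (arrays of sub-token ids) into batches
    // whose padded size (longest member times batch size) fits the budget.
    static List<List<int[]>> pack(List<int[]> functions, int tokenBudget) {
        List<int[]> sorted = new ArrayList<>(functions);
        sorted.sort(Comparator.comparingInt(f -> f.length));
        List<List<int[]>> batches = new ArrayList<>();
        List<int[]> current = new ArrayList<>();
        for (int[] fn : sorted) {
            // Sorted ascending, so fn.length is the padded width of the batch.
            if (!current.isEmpty() && fn.length * (current.size() + 1) > tokenBudget) {
                batches.add(current);
                current = new ArrayList<>();
            }
            current.add(fn);
        }
        if (!current.isEmpty()) batches.add(current);
        return batches;
    }
}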
A.3 EVALUATION DETAILS
A.3.1 IF-CONDITION LOCALIZATION & REPAIR METRICS
Since some methods have far more program blocks than others, simply ranking all invariants across method boundaries by entropy would lead to bigger methods being highly disproportionally represented. Rather, we try to balance method and invariant level inspection cost by simulating the inspection of 10% of invariants in our dataset from a subset of methods. We do so by first ranking methods by the entropy of their top invariant, from low to high, and then inspecting all invariants from these methods in order until we have inspected 10% of all location/invariant pairs in this dataset (which number 73,738). The 10% inspection (recall) level in Table 1 corresponds to a threshold of just 0.0233 bits, under which the average method has 55.3 blocks – substantially more than the average method overall. Separating out the functions with 32 or fewer program points (the mean), the overall accuracy increases to 16.3% and the 10% recall precision increases to 50.0% – the joint task is naturally easier on shorter methods.
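A minimal sketch of this simulation, with a hypothetical record type standing in for our actual data structures:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class InspectionSim {
    // A method is summarized by the entropy of its most confident
    // invariant and its number of location/invariant pairs.
    record Method(double bestEntropy, int numPairs) {}

    // Inspect whole methods in order of best-invariant entropy until
    // 10% of all location/invariant pairs have been covered.
    static List<Method> inspected(List<Method> methods, int totalPairs) {
        List<Method> ranked = new ArrayList<>(methods);
        ranked.sort(Comparator.comparingDouble(Method::bestEntropy));
        List<Method> out = new ArrayList<>();
        int budget = totalPairs / 10, used = 0;
        for (Method m : ranked) {
            if (used >= budget) break;
            out.add(m);
            used += m.numPairs();
        }
        return out;
    }
}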
A.3.2 GENERATING PRE- AND POST-CONDITIONS WITH BODYGUARD
The comparison with Daikon invariants comes with an important caveat: Daikon only generates method pre- and post-conditions. This means that we cannot perfectly classify the validity of all our invariants. Nevertheless, our experiments on missing conditions show that our models are precise at inferring even very specific missing conditions, which strongly suggests (as our manual analysis has too) that many of its other suggestions are valid as well.
Secondly, our tool produces invariants for any syntactic block of code throughout the method and does not have a general mechanism to indicate that pre- or post-conditions are required. To imitate these for our tool, the closest approximation is to mark the entire method body as needing an invariant when a pre-condition is required, and the final (return) statement otherwise. To avoid the complexity of having to match multiple return points, or none at all for void methods, we restrict the latter case to methods with a single return statement only. Note that the latter is an imperfect approximation: our tool only learns to predict guards that precede a statement. A guard that it predicts for a return statement may not be an appropriate substitution for true post-conditions but rather a reason to return at that particular point.
A.3.3 MEASURING OVERLAP WITH DAIKON’S INVARIANTS
We quantify the overlap between our predicted invariants and Daikon’s using normalized Cumulative Gain. This metric captures the quality of a ranker in terms of how often it returns relevant elements; it is traditionally used in information retrieval, for example to evaluate a web searcher. Although discounted cumulative gain is more commonly used, we refrain from penalizing based on “rank” of predictions, because there is no reason to assume that Daikon’s invariants are more salient or relevant than others that we predict. That is, all that matters is that Daikon’s invariants are among our (top 10) predictions.
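Concretely, one plausible way to write the undiscounted metric (our normalization may differ in minor details): for a program point with Daikon invariant set D and our top-k predictions p_1, ..., p_k,

\mathrm{CG}@k = \sum_{i=1}^{k} \mathbf{1}\left[\, p_i \in D \,\right], \qquad \mathrm{nCG}@k = \frac{\mathrm{CG}@k}{\min(k,\ |D|)}

so a score of 1 means every Daikon invariant at that point (up to k = 10) appears somewhere among our predictions, regardless of rank.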
A.4 FURTHER RESULTS
A.4.1 CHARACTERISTICS OF MANUALLY INSPECTED INVARIANTS
A large portion of the manually verified invariants in Section 4.3 corresponded to fairly trivial statements, such as instanceof assertions for a value being cast to the corresponding type. In some cases, our invariants were more general or accurate than Daikon’s; e.g. when BODYGUARD asserts that an object is not null whereas Daikon asserts that a member of that object is not null. At other times, we inferred invariants that Daikon missed entirely, likely due to limitations in its internal rules and heuristics. For instance, as a pre-condition of:
static ReliableFile getReliableFile(File file) throws IOException {
    if (file.isDirectory()) {
        throw new FileNotFoundException("");
    }
    return new ReliableFile(file);
}
BODYGUARD correctly inferred that !file.isDirectory(), while Daikon only offered file != null.
In another case, our tool produced a more specific invariant for this PMD snippet:
public int getPriority() { return priority; }
Here, Daikon asserts that the priority level is exactly either 2 or 3, because those are the only observed values in the (evidently unrepresentative) traces of this method. This indicates how Daikon’s invariants can be inaccurate even with available workloads. BODYGUARD more broadly anticipates that priority >= 0, which matches the method’s actual specification as encoded in its Javadoc documentation (which our tool does not use).
A.4.2 FURTHER EXAMPLES
In the below example,7 a badge variable, initialized to null, is first assigned a value based on program state, and then added to two collections (local and, conditionally, global). This second segment, after the switch statement, should have been guarded by a check that badge != null, since not every case assigns it a value. Across all 53 permutations of code blocks (and countless options per block) in this method, BODYGUARD predicts this condition at the correct location at rank 3. Its first prediction was the nonsensical statement !global as a guard for the entire method body. Possibly, no good prediction was possible for that range, so this option had low entropy by sheer contrast with other possibilities. The second ranked prediction was badge == null for every line after the declaration of badge. While this is tautologically valid as a pre-condition for those lines, it highlights the importance of specificity in range – it is only truly invariant for some of these lines, specifically, the start of each case and the break statement of the latter two, a range that is not currently supported by our approach.
public static void validateTutorial() {
    Badge badge = null;
    switch (Dungeon.hero.heroClass) {
        case WARRIOR: badge = Badge.TUTORIAL_WARRIOR; break;
        case MAGE: badge = Badge.TUTORIAL_MAGE; break;
        case ROGUE: break;
        case HUNTRESS: break;
    }
    local.add(badge);
    if (!global.contains(badge)) {
        global.add(badge);
        saveNeeded = true;
    }
}

7Repaired in https://github.com/00-Evan/shattered-pixel-dungeon/commit/475d78cd0599a1d39c4708a91fbb30c95b3f3418
The following snippet8 returns a default image, generating it on the first call. Even though the documentation of createBitmap(int, int, Bitmap.Config)9 does not specify it, this method can return null in rare circumstances, such as when a phone runs out of memory and recovers by aborting this call.10 BODYGUARD correctly infers empty != null as the top invariant, having seen similar calls in other Android projects in its training data. Specifically, it predicts this invariant both for just the line containing empty.eraseColor (rank 1) and for the block including that and the next line (rank 2). The latter is the more correct segment.
private static Bitmap getDefaultThumbnail() {
    if (defaultImage == null) {
        Bitmap empty = Bitmap.createBitmap(160, 200, Bitmap.Config.ARGB_8888);
        empty.eraseColor(Color.WHITE);
        defaultImage = paint(empty);
    }
    return defaultImage;
}
A.5 LIMITATIONS
We evaluated our predictions broadly to assess both their salience and validity. Even so, it is hard to automatically assess all of our invariants, especially those inserted in the middle of methods and those whose vocabulary is outside of what Daikon finds. However, the results on the task of predicting missing if-statements (which avoids these evaluation problems) are quite encouraging; we believe that this bodes well for the more general settings. Future work may better assess the validity of our entire vocabulary of invariants, perhaps by injecting asserts corresponding to our predictions into the source code and executing the tests.
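For instance, a hypothetical validation harness (names invented for this sketch; asserts require running the JVM with -ea) could turn a prediction into a runtime check:

class PositionUpdater {
    // A predicted invariant, injected as an assert at its predicted
    // location; re-running the test suite then checks its validity.
    int move(int dx, int time) {
        assert time > 0 : "predicted invariant violated: time > 0";
        return dx * time;
    }
}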
Our second main criterion is salience: our predictions should be particularly relevant to the referenced code, in contrast to prior work. We chose to assess this by using real missing if guards, which would appear to be a good example of particularly salient implicit conditions (as developers chose to make them explicit). We did not quantitatively study other types of salience, such as which conditions are most informative or intuitively obvious to real developers. This, too, may be a fruitful area for future work; human subject studies involving invariants have produced worthwhile insights into developer behavior in the past (Staats et al., 2012).
8Repaired in https://github.com/SufficientlySecure/document-viewer/commit/ 680650556340aa15502e1ec375e4255c1c16fb5b
9https://developer.android.com/reference/android/graphics/Bitmap#createBitmap(int,int,android.graphics.Bitmap.Config)
10As suggested at https://stackoverflow.com/a/14778533.
Review Questions
1. What is the focus and contribution of the paper on code invariants learning?
2. What are the strengths of the proposed approach, particularly in its ability to infer invariants?
3. What are the weaknesses of the paper, especially regarding its evaluation and comparison with other works?
4. How does the reviewer assess the novelty and significance of the proposed method compared to prior works such as NeurIPS 2018 and ICLR 2020?
Review
The paper presents a method for statically learning code invariants from source code using a variant of transformers.
Strengths
The paper demonstrates that on the synthetic dataset the proposed approach can infer many invariants.
Weaknesses
The evaluation with a synthetic dataset seems very weak. “If” checks are not good proxies for useful invariants, since in most programs many if-checks are simply unreachable or redundant. In practice, not all invariants are useful for the downstream tasks (code fixing, bug finding, etc.) mentioned by the authors. Without a more direct evaluation, it is very hard to tell how useful the learned invariants actually are for these tasks. The paper would be significantly stronger if the authors evaluated their tool against existing loop-invariant inference datasets with ground-truth data, like those used in Si et al. (NeurIPS 2018).
The Transformer-based model seems to be directly reused from Hellendoorn et al. (ICLR 2020). Thus the contribution in terms of model design is limited.
The authors also did not cite or compare against the current state-of-the-art loop-invariant learning work: CLN2INV: Learning Loop Invariants with Continuous Logic Networks (Ryan et al., ICLR 2020).
Automatically inferring invariants is usually approached either in constrained settings where some “checker” (e.g. an SMT solver) or ground-truth is available, or under the assumption that we have access to execution traces from realistic workloads. Among the first, Sharma et al. (2013b) find algebraic (polynomial) invariants by solving a system of linear equations with an SMT solver and using counterexamples to create new test inputs. Sharma et al. (2013a) use PAC-learning to learn integer loop invariants on programs with a single loop, trained by contrasting passing and failing test cases. Padhi et al. (2016) learn pre-conditions and loop invariants as boolean combinations of arithmetic conditions (“features"), which they synthesize by generating and testing all features up to a size cutoff. This approach is agnostic to the program structure, as is Pham et al. (2017), who use a fixed set of feature templates over state vectors to learn linear inequalities that classify passing and failing state vectors, requiring both post-conditions and passing and failing tests to be in place. In contrast, our work makes no assumptions about the code other than the availability of a parser. In settings where an SMT solver (or test cases) is available, it could be used to filter invalid invariants generated by BODYGUARD.
Among machine learning based approaches, Si et al. (2018) use policy-learning to teach a GNN to generate loop invariants in cooperation with an SMT solver (Z3), which provides intermediate rewards (through counterexamples) to finesse the sparsity of the eventual reward (the final validity of the invariant). A second reward is added to reject “meaningless" and “trivial" predicates such e == e or e < e. Besides not requiring an SMT solver, our approach learns notions like “relevant” and “natural” directly from real code. Relatedly, Brockschmidt et al. (2017) also use GNNs to induce invariants over data structures, using a similar approach of generating invariants (in separation logic) supervised by data produced from test runs. The production is based on hand-engineered features over the data-structure graphs. Both these approaches may be symbiotic with ours where tests or logical constraints are known, although they consider different classes of invariants.
Daikon (Ernst et al., 2007) belongs to the second class of invariant predictors, leveraging execution traces from realistic inputs to infer a large vocabulary of method pre- and post-conditions. This general applicability has led to its frequent as a basis for other tools, often to generate an initial corpus of invariants for tasks such as automated patching (Perkins et al., 2009) and test generation (Artzi et al., 2006; Pacheco & Ernst, 2005). However, truly representative inputs are rare, and using incomplete data risks generating many irrelevant or invalid invariants. Polikarpova et al. (2009) found that the size of the test suite affects the validity of generated invariants on Eiffel programs. Kim & Petersen anecdotally note various issues with Daikon’s invariants on large, C++ systems, such as a high degree of false positives and few insightful invariants. Hellendoorn et al. (2019a) similarly observe (on hand-annotated C# functions) few relevant and valid invariants based on executions from unit test. Our approach learns directly from natural conditions to generate relevant and generalizable conditions, and when trace data is present, it can be used to filter out invalid invariants.
6 CONCLUSION
We conjectured that typically used invariants are in a sense natural, like many other aspects of programs (Hindle et al., 2012; Barr et al., 2013; Tsimpourlas et al., 2020), and therefore predictable, intentionally written in standardized ways for ease of reading and writing Casalnuovo et al. (2019). Our results support this claim: both explicit (if-statements) and implicit (invariants) conditions pertaining to code can be predicted precisely, and with high validity from code reading alone, facilitated by our proposed data generation approach and loss function. As a result, we can generate many invariants that were previously only accessible through trace data (and more), which greatly increases the reach and applicability of invariant inference.
This finding has broad implications: our tool can provide valuable semantic insights both to developers, e.g. to aide debugging efforts or facilitate code understanding, and to other tools, many of which struggle to navigate an exponentially large search space of programs. Our tool can help bias this search space using highly likely assertions, which could greatly improve the range and quality of solutions found by downstream applications. In summary, our novel approach learns to reason about program state by synthesizing training data from if-conditions; this empowers BODYGUARD to reliably generate useful invariants entirely from static code.
A APPENDIX
A.1 DATA COLLECTION DETAILS
We base our evaluation on a Java dataset consisting of the top 10,000 most-starred Java projects on Github, collected March 30th, 2020 using the Github v3 API. Since generating our training data samples is quite expensive, we used just the top 1,000 (most starred) of these projects to automatically generate training and evaluation samples for the results described in Section 4.1. This dataset was split between training, held-out and evaluation sets at the organization level to ensure minimal duplication, as projects within the same organization often share many coding patterns (Allamanis, 2019). We allocated 95% of organizations (920 projects) to training data, 2% to held-out data (19 projects), and 3% to test data (61 projects), to assess the final trained models.
A.1.1 INVARIANT GENERATION
We parse each file using Eclipse’s JDT parser and extract all (non-nested) methods from the resulting parse tree. Within each method, we detect all if-statements, removing all those whose conditions contain side-effects (such as assignments, increment/decrement operators, and non-whitelisted methods, see Appendix A.1.2), and those whose body contains a control-flow altering statement (e.g. return, throw) unless it is the sole statement.5 For the remainder, we generate samples based on the following types of if-statements:
Simple if-statements: these include samples like Figure 1, in which a single if-statement guards a simple body with no control-flow altering code.
If-else statements: for these we generate two samples: one in which we remove the else block entirely and generate an if-invariant as above, and one in which we negate the condition and generate an invariant for just the else block. Note that else if statements in Java are treated as nested statements and thus handled the same way.
Control-flow altering if-statements: any if-statement whose body prevents the execution of subsequent code, by containing just a return, break, continue, or throw (Exception) statement, is treated as declaring an invariant (namely, the negation of the if-condition) for the ensuing code.
In all cases, the surrounding context is the entire method, and the range of tokens to which the condition applies (namely, those that used to be guarded) is stored with the sample. We generate samples for all these conditions, producing a new sample for every if-statement. This ensures that each sample minimally alters the original code, which reduces the risk that we produce unnatural code (which would harm the generalization of our model). As such, a method can produce many samples, so functions with many conditions will be represented proportionally more often. We do not consider that problematic, as 1. long functions tend to have correspondingly more invariants, so the increased emphasis should be beneficial to our model, and 2. we anyways cap our training samples to only modestly large functions (up to 500 sub-tokens, which typically translates to the order of 20 lines), due to memory constraints.
A.1.2 PRODUCING NATURAL FUNCTIONS
Not all if-guards can be removed without changing the semantics of the code; conditions can have side-effects. This includes assignments (e.g. if ((x = y) != null)), certain operators (viz. ++ and --) and method calls with side-effects. To ensure that the converted code is semantically coherent, and because invariants should not have side effects anyways, we omit all such cases. Many method calls do not have side effects, so to avoid limiting our dataset too much, we heuristically select a large, but relatively “safe” set of these based on common coding patterns. This includes common “getter” methods, java.lang.Math calls, object equality tests, collection inspection methods, such as inclusion checks (e.g. ‘contains’, ‘has’) and size-related methods, and a few miscellaneous others that were common in our training data (e.g. parseInt, name). The regexes used to detect these various types of methods are listed in Table 2.
Removing if-statements does not always yield meaningful code, consider: int foo(int x, int y) {
if (x > y) { return x; } return y;
}
5When an if-statement body terminates the current branch of execution only after first executing some other code, generating equivalent unguarded code is complicated: inlining the guarded code (minus the final statement) would often produce very unnatural code, as it tends to involve some form of error-recovery, such as logging or resetting a value. Omitting the entire block instead, as we do for simple control-flow altering statements may be more appropriate; future work can explore this, and various other, corner cases to generate more samples.
If we remove the conditional check, the resulting method is left with just two consecutive return statements, which is invalid in Java. This particular case would trigger a compiler error, but not all inappropriate removals do: if the if-body had instead assigned y = x + 1;, removing it would result in y always being assigned x + 1 before returning, making the parameter useless. Not using a parameter is not erroneous by definition, since the method foo may be inherited (or overriden in a subclass) and other instantiations do make use of it, so Eclipse’s parser just emits a warning. Since both these cases result in code that is both unrepresentative of typical Java, and would yield highly predictable invariants, we additionally reparse each resulting function after removal of the targeted if-statement and discard any changes that trigger compilation warnings and errors.
Specifically, Eclipse JDT requires full type resolution to guarantee correct program analysis and stops checking for violations if it finds compile-time errors from missing types. When processing as many projects as we do (many of which cannot be built automatically), we cannot soundly resolve all dependencies for each project. As a close approximation, we instead parse each function in its entire project context to allow as much heuristic type resolution as possible. Then, we look for any increase in warnings and errors between the method before and after removing an if-statement. This reduces the number of collected samples and increases the time to generate the dataset (to ca. 200 CPU hours for 1K projects), but also increases its validity by eliminating many inappropriate fragments.
Finally, we limit our functions to those having 500 (sub-)tokens or less to facilitate a reasonable modeling throughput. This does not reduce the dataset by much; most functions tend to fit this limit. In total, we collect ca. 2.34M training samples, 12.1K held-out samples and 101K test samples, with approximately 200 sub-tokens per function on average.
A.1.3 COLLECTING “MISSING IF” BUGS
We collect our dataset of missing if-condition bugs from across the history of all the aforementioned 10K projects in our dataset. For each project, we parsed every commit to the main branch, using git’s “diff” function to identify cases in which the sole addition was to wrap one or more existing statements in an if-statement. This yielded 32,471 samples from across 8,174,552 commits. Although all of these may constitute interesting samples, we prioritize bug-detection for now as the most direct application of our model. To ensure that our collected samples are likely bug-related, we focus only on the ca. 3.7K cases in which the entire commit introduced just a single if-statement in a single Java file and the corresponding commit message contained any of the common bug-related terms such as “fix”, “bug”, and “fault” (Ray et al., 2016). We additionally filtered out any commits to projects that were included in our training dataset to avoid the risk of overlap (which need not be present as many commits reflect now out-dated code), yielding 3,146 samples in total.
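The commit-level filter can be sketched as below; the diff statistics are assumed to be precomputed from git, and the term list is the subset of bug-related keywords named above.

// Sketch: keep only commits whose sole change adds one if-statement in one
// Java file and whose message suggests a bug fix (terms per Ray et al., 2016).
class BugFixFilter {
    private static final String[] BUG_TERMS = {"fix", "bug", "fault"};

    static boolean isLikelyBugFix(String commitMessage, int ifStatementsAdded, int javaFilesChanged) {
        if (ifStatementsAdded != 1 || javaFilesChanged != 1) return false;
        String msg = commitMessage.toLowerCase();
        for (String term : BUG_TERMS) {
            if (msg.contains(term)) return true;
        }
        return false;
    }
}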
[Table 3 — per-project counts; columns: Project, Methods, Invariants]
A.1.4 RUNNING DAIKON
Comparing our tool to Daikon (Ernst et al., 2007) required some adaptations. Daikon requires projects that are fully built, instrumentable, and have representative workloads. Unit tests are often insufficient because they test for both appropriate and inappropriate values (e.g. those triggering an exception), which is counter to our purpose.6 Scaling Daikon to our aforementioned dataset is not feasible; indeed, to the best of our knowledge there is no large public dataset of Daikon invariants on real programs. Instead, we created a modestly large dataset of our own.
To do so, we leveraged the Dacapo benchmark (Blackburn et al., 2006). Originally created to benchmark program optimizations (e.g. through better compilers), each project in this benchmark comes with a set of representative workloads designed to execute many of its paths. This is ideal for our case. Practically, although the benchmark comes with a single runner for each project, Daikon could not instrument through the reflective calls that this framework uses. Instead, we manually instrumented and ran 8 projects (details in Table 3) in this suite directly, which, in nearly all cases, involved writing our own “runner” to mimic Dacapo’s instrumentation while calling the requisite project-code directly. We then applied Daikon as usual, running the code under instrumentation first and then producing invariants from the resulting traces. Table 3 summarizes the resulting invariant counts.
We limited the volume of the collected trace data by exponentially decreasing the number of traces for each program point once it was seen sufficiently often (10 times) and excluding many values from tracing, such as those that are not visible from the program point of interest and any nested values with more than three levels. Even then, Daikon required upwards of 30GB of RAM and nearly an hour of processing for the larger projects – much more than our models.
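One plausible reading of this schedule is sketched below; the exact decay rate is our assumption, not Daikon’s precise configuration.

import java.util.Random;

// Sketch: record the first 10 observations of a program point, then sample
// further observations with exponentially decreasing probability.
class TraceSampler {
    private final Random rng = new Random();

    boolean shouldRecord(int timesSeen) {
        if (timesSeen < 10) return true;         // always record early observations
        double p = Math.pow(0.5, timesSeen - 9); // illustrative halving schedule
        return rng.nextDouble() < p;
    }
}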
A.2 MODELING DETAILS
A.2.1 PROGRAM GRAPH EXTRACTION FOR JAVA
We used Eclipse’s JDT parser with approximate name-binding resolution to extract five edge types across 3 broad categories of information that are accessible in source code:
• Lexical: every token is connected to its neighbors through next-token edges (and their reverse). This adds additional sensitivity to lexically local information beyond the positional encoding used in the standard Transformer.
• Syntactic: we extract all AST parent-child relations, which provide insight into the hierarchical structure of source code.
• Data-flow: we include three types of data-flow edges: next-use edges, which connect lexically sequential uses of the same variable; computed-from edges, which connect any variable usage to the last value it was assigned, and def-use edges, which connect every variable usage to its (single) original declaration point.
In addition, every edge type has a symmetric, mirrored version (e.g. prev-token), yielding a total of 10 distinct edge kinds used by our model.
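Concretely, the resulting edge vocabulary can be summarized as below; the names for the mirrored versions are our own illustrative labels.

// The ten edge kinds: five relations plus their mirrored counterparts.
enum EdgeKind {
    NEXT_TOKEN, PREV_TOKEN,   // lexical
    AST_CHILD, AST_PARENT,    // syntactic
    NEXT_USE, PREV_USE,       // data-flow: sequential uses of the same variable
    COMPUTED_FROM, COMPUTES,  // data-flow: usage to the value last assigned
    DEF_USE, USE_DEF          // data-flow: usage to the original declaration
}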
6In addition, Daikon cannot instrument JUnit-tested code since it uses reflection, which effectively makes Java tests off-limits.
A.2.2 TRAINING DETAILS
Consistent with recent observations regarding effective modeling of source code vocabulary (Hellendoorn & Devanbu, 2017; Karampatsis et al., 2020), we use Byte-Pair Encoding to create a sub-token vocabulary based on the tokens in our training data. Our vocabulary, estimated from the training data, spans 10,000 sub-tokens; both the input function and the predicted invariant are sub-tokenized using this (reversible) dictionary. Transformer models generally scale in memory needs with the square of the size of their inputs. To ensure that our minibatches are sufficiently large to keep the gradients stable, we restrict our inputs to functions with up to 500 (BPE) tokens and our invariants to 50 tokens (although invariants that long are very rare). With these cut-offs, we train batches of up to 12,500 tokens in parallel across two NVidia RTX Titan GPUs with 24GB of VRAM each. By packing similarly sized functions per batch, we minimize the overhead from padding and are able to fit ca. 70 functions per batch on average.
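The length-based packing can be sketched as follows, assuming each function is already encoded as an array of sub-token IDs.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: group similarly sized functions into batches of up to 12,500 tokens
// to minimize padding overhead.
class BatchPacker {
    static List<List<int[]>> pack(List<int[]> functions, int maxBatchTokens) {
        functions.sort(Comparator.comparingInt(f -> f.length)); // similar lengths end up adjacent
        List<List<int[]>> batches = new ArrayList<>();
        List<int[]> batch = new ArrayList<>();
        int tokens = 0;
        for (int[] f : functions) {
            if (!batch.isEmpty() && tokens + f.length > maxBatchTokens) {
                batches.add(batch);
                batch = new ArrayList<>();
                tokens = 0;
            }
            batch.add(f);
            tokens += f.length;
        }
        if (!batch.isEmpty()) batches.add(batch);
        return batches;
    }
}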
A.3 EVALUATION DETAILS
A.3.1 IF-CONDITION LOCALIZATION & REPAIR METRICS
Since some methods have far more program blocks than others, simply ranking all invariants across method boundaries by entropy would lead to bigger methods being highly disproportionally represented. Rather, we try to balance method and invariant level inspection cost by simulating the inspection of 10% of invariants in our dataset from a subset of methods. We do so by first ranking methods by the entropy of their top invariant, from low to high, and then inspecting all invariants from these methods in order until we have inspected 10% of all location/invariant pairs in this dataset (which number 73,738). The 10% inspection (recall) level in Table 1 corresponds to a threshold of just 0.0233 bits, under which the average method has 55.3 blocks – substantially more than the average method overall. Separating out the functions with 32 or fewer program points (the mean), the overall accuracy increases to 16.3% and the 10% recall precision increases to 50.0% – the joint task is naturally easier on shorter methods.
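This simulation can be sketched as follows, assuming each method carries the entropies of its candidate location/invariant pairs.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: rank methods by their single best (lowest-entropy) invariant, then
// inspect whole methods until 10% of all location/invariant pairs are seen.
class InspectionSimulator {
    record Method(String name, double[] entropies) {
        double best() {
            double min = Double.MAX_VALUE;
            for (double e : entropies) min = Math.min(min, e);
            return min;
        }
    }

    static List<Method> select(List<Method> methods, int totalPairs) {
        methods.sort(Comparator.comparingDouble(Method::best)); // most confident methods first
        List<Method> inspected = new ArrayList<>();
        int budget = totalPairs / 10, seen = 0;                 // 10% inspection level
        for (Method m : methods) {
            if (seen >= budget) break;
            inspected.add(m);
            seen += m.entropies().length;                       // every pair in the method is inspected
        }
        return inspected;
    }
}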
A.3.2 GENERATING PRE- AND POST-CONDITIONS WITH BODYGUARD
The comparison with Daikon invariants comes with an important caveat: Daikon only generates method pre- and post-conditions. This means that we cannot perfectly classify the validity of all our invariants. Nevertheless, our experiments on missing conditions show that our models are precise at inferring even very specific missing conditions, which strongly suggests (as has our manual analysis) that many of its other suggestions are valid as well.
Secondly, our tool produces invariants for any syntactic block of code throughout the method and does not have a general mechanism to indicate that pre- or post-conditions are required. To imitate these for our tool, the closest approximation is to mark the entire method body as needing an invariant when a pre-condition is required and the final (return) statement otherwise. To avoid the complexity of having to match multiple return points, or none at all for void methods, we restrict the latter case to methods with a single return statement only. Note that the latter is an imperfect approximation: our tool only learns to predict guards that precede a statement. A guard that it predicts for a return statement may not be an appropriate substitution for true post-conditions but rather a reason to return at that particular point.
A.3.3 MEASURING OVERLAP WITH DAIKON’S INVARIANTS
We quantify the overlap between our predicted invariants and Daikon’s using normalized Cumulative Gain. This metric captures the quality of a ranker in terms of how often it returns relevant elements; it is traditionally used in information retrieval, for example to evaluate a web searcher. Although discounted cumulative gain is more commonly used, we refrain from penalizing based on “rank” of predictions, because there is no reason to assume that Daikon’s invariants are more salient or relevant than others that we predict. That is, all that matters is that Daikon’s invariants are among our (top 10) predictions.
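In this un-discounted form, the metric reduces to the fraction of Daikon’s invariants recovered anywhere in our top-10 list; the string-level comparison below is a simplification of our actual matching.

import java.util.List;
import java.util.Set;

// Sketch: normalized (un-discounted) cumulative gain over the top predictions.
class OverlapMetric {
    static double normalizedCumulativeGain(Set<String> daikonInvariants, List<String> topPredictions) {
        long hits = daikonInvariants.stream().filter(topPredictions::contains).count();
        return (double) hits / daikonInvariants.size(); // 1.0 = every Daikon invariant recovered
    }
}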
A.4 FURTHER RESULTS
A.4.1 CHARACTERISTICS OF MANUALLY INSPECTED INVARIANTS
A large portion of the manually verified invariants in Section 4.3 corresponded to fairly trivial statements, such as instanceof assertions for a value being cast to the corresponding type. In some cases, our invariants were more general or accurate than Daikon’s; e.g. when BODYGUARD asserts that an object is not null whereas Daikon asserts that a member of that object is not null. At other times, we inferred invariants that Daikon missed entirely, likely due to limitations in its internal rules and heuristics. For instance, as a pre-condition of:
static ReliableFile getReliableFile(File file) throws IOException {
    if (file.isDirectory()) {
        throw new FileNotFoundException("");
    }
    return new ReliableFile(file);
}
BODYGUARD correctly inferred that !file.isDirectory(), while Daikon only offered file != null.
In another case, our tool produced a more specific invariant for this PMD snippet:
public int getPriority() { return priority; }
Here, Daikon asserts that the priority level is exactly either 2 or 3, because those are the only observed values in the (evidently unrepresentative) traces of this method. This indicates how Daikon’s invariants can be inaccurate even with available workloads. BODYGUARD more broadly anticipates that priority >= 0, which matches the method’s actual specification as encoded in its Javadoc documentation (which our tool does not use).
A.4.2 FURTHER EXAMPLES
In the example below,7 a badge variable, initialized to null, is first assigned a value based on program state, and then added to two collections (local and, conditionally, global). This second segment, after the switch statement, should have been guarded by a check that badge != null, since not every case assigns it a value. Across all 53 permutations of code blocks (and countless options per block) in this method, BODYGUARD predicts this condition at the correct location at rank 3. Its first prediction was the nonsensical statement !global as a guard for the entire method body. Possibly, no good prediction was possible for that range, so this option had low entropy by sheer contrast with other possibilities. The second-ranked prediction was badge == null for every line after the declaration of badge. While this is tautologically valid as a pre-condition for those lines, it highlights the importance of specificity in range – it is only truly invariant for some of these lines, specifically the start of each case and the break statements of the latter two, a range that is not currently supported by our approach.
public static void validateTutorial() {
    Badge badge = null;
    switch (Dungeon.hero.heroClass) {
        case WARRIOR: badge = Badge.TUTORIAL_WARRIOR; break;
        case MAGE: badge = Badge.TUTORIAL_MAGE; break;
        case ROGUE: break;
        case HUNTRESS: break;
    }
    local.add(badge);
    if (!global.contains(badge)) {
        global.add(badge);
        saveNeeded = true;
    }
}

7Repaired in https://github.com/00-Evan/shattered-pixel-dungeon/commit/475d78cd0599a1d39c4708a91fbb30c95b3f3418
The following snippet8 returns a default image, generating it on the first call. Even though the documentation of createBitmap(int, int, Bitmap.Config)9 does not specify it, this method can return null in rare circumstances, such as when a phone runs out of memory and recovers by aborting this call.10 BODYGUARD correctly infers empty != null as the top invariant, having seen similar calls in other Android projects in its training data. Specifically, it predicts this invariant both for just the line containing empty.eraseColor (rank 1), and for the block including that and the next line (rank 2). The latter is the more correct segment.
private static Bitmap getDefaultThumbnail() {
    if (defaultImage == null) {
        Bitmap empty = Bitmap.createBitmap(160, 200, Bitmap.Config.ARGB_8888);
        empty.eraseColor(Color.WHITE);
        defaultImage = paint(empty);
    }
    return defaultImage;
}
A.5 LIMITATIONS
We evaluated our predictions broadly to assess both their salience and validity. Even so, it is hard to automatically assess all of our invariants, especially those inserted in the middle of methods and those whose vocabulary is outside of what Daikon finds. However, the results on the task of predicting missing if-statements (which avoids these evaluation problems) are quite encouraging; we believe that this bodes well for the more general settings. Future work may better assess the validity of our entire vocabulary of invariants, perhaps by injecting asserts corresponding to our predictions into the source code and executing the tests.
Our second main criterion is salience: our predictions should be particularly relevant to the referenced code, in contrast to prior work. We chose to assess this by using real missing if guards, which would appear to be a good example of particularly salient implicit conditions (as developers chose to make them explicit). We did not quantitatively study other types of salience, such as which conditions are most informative or intuitively obvious to real developers. This, too, may be a fruitful area for future work; human subject studies involving invariants have produced worthwhile insights into developer behavior in the past (Staats et al., 2012).
8Repaired in https://github.com/SufficientlySecure/document-viewer/commit/680650556340aa15502e1ec375e4255c1c16fb5b
9https://developer.android.com/reference/android/graphics/Bitmap#createBitmap(int,int,android.graphics.Bitmap.Config)
10As suggested at https://stackoverflow.com/a/14778533.

1. What is the novel idea proposed by the paper regarding discovering likely invariants for code?
2. What are the strengths and weaknesses of the proposed technique, particularly in its application and effectiveness in practice?
3. How does the reviewer assess the significance and usefulness of the technique in comparison to other related works?
4. What are some potential applications or scenarios where the technique could be beneficial, such as bug finding?
5. Are there any concerns or limitations regarding the experimental results and their interpretation?

Review
The paper proposes to discover likely invariants for code by observing snippets that check for the given conditions, assuming these conditions encode invariants for the code executing before and after the point where the condition was checked to hold (respectively, to not hold, for the negated invariant). This is a novel idea that uses code with correct if-conditions to guess the invariants for code that has the conditions missing.
My main criticism of the paper is that it does not give a compelling reason why one would want to apply this technique. While this is a smart way to obtain the invariants, the paper does not give much intuition for why it could be useful in practice. Even on the examples in the paper, the machine learning algorithm probably learns invariants from identifier names and not from the semantics of the surrounding code.
The authors could relate the work to a large corpus of research on learning invariants for functions based on, for example, usages of functions, as done in [1] or [2], where the techniques find actual bugs in code. For example, if the invariant is non-nullness of x, this may be because x comes from a function that sometimes returns null or because it comes from a function that does not accept null. If I wanted to do bug finding, for example, I would want to know about contradicting invariants coming from the two functions.
In terms of execution, the paper is well written and the techniques look state-of-the-art from a machine learning perspective (although no baselines are given). However, the experiments are insufficient for showing the usefulness of the idea. With Daikon overlap in the 70% range and precision also in the same range, it is not clear that the tool gives any new valid invariants on top of Daikon. In terms of bug finding, the results are also inconclusive as to whether any bugs can be found. If I were to put the tool to the test on 100 methods, of which normally fewer than 10 are buggy, I could expect 20 false positives.
Minor: theorem proofers -> theorem provers. Figure 5a talks about overlap, but the axis says precision.
[1] Ted Kremenek, Paul Twohey, Godmar Back, Andrew Y. Ng, Dawson R. Engler: From Uncertainty to Belief: Inferring the Specification Within. OSDI 2006.
[2] Insu Yun, Changwoo Min, Xujie Si, Yeongjin Jang, Taesoo Kim, Mayur Naik: APISan: Sanitizing API Usages through Semantic Cross-Checking. USENIX Security 2016.
ICLR | Title
Learning to Infer Run-Time Invariants from Source code
Abstract
Source code is notably different from natural language in that it is meant to be executed. Experienced developers infer complex “invariants" about run-time state while reading code, which helps them to constrain and predict program behavior. Knowing these invariants can be helpful; yet developers rarely encode these explicitly, so machine-learning methods don’t have much aligned data to learn from. We propose an approach that adapts cues within existing if-statements regarding explicit run-time expectations to generate aligned datasets of code and implicit invariants. We also propose a contrastive loss to inhibit generation of illogical invariants. Our model learns to infer a wide vocabulary of invariants for arbitrary code, which can be used to detect and repair real bugs. This is entirely complementary to established approaches, which either use logical engines that scale poorly, or run-time traces that are expensive to obtain; when present, that data can complement our tool, as we demonstrate in conjunction with Daikon, an existing tool. Our results show that neural models can derive useful representations of run-time behavior directly from source code.
1 INTRODUCTION
Software maintenance requires reading a lot of code. Experienced developers are adept at this, garnering rich semantics just from this “static” (viz, without running the code) inspection to find complex bugs, predict a function’s outputs from its inputs, and learn new coding patterns. They strongly rely on generic assumptions about the program’s run-time behavior; e.g., that a list index never escapes the list bounds and strictly increases. Such “invariants” capture general, yet relevant constraints on the program’s expected run-time behavior.
Automatically inferring invariants can help both developers and tools: first, they can be used to detect bugs where explicit assumptions are incorrect or implicit ones ought to be explicit; second, invariants can guide myriad other tools, such as test-case generators (Artzi et al., 2006). However, inferring invariants is not tractable in general and sound approximations don’t scale beyond very small programs. Instead, popular tools either use dynamic trace data from real executions (esp. Daikon (Ernst et al., 2007)), which requires costly instrumentation, or focuses on highly constrained cases such as loops (Sharma et al., 2013a; Padhi et al., 2016).
Yet this scalability obstacle may be largely artificial. Practical programs rarely take on an exponential range of values (e.g., integers tend to come in a bounded range), and developers seem able to make such inferences without undertaking a project-scale analysis. Rather, they reliably extract them from a local context, using their past experience and cues from the code itself. Consider the snippet in Figure 1: the program on the right uses a time variable, returned from one method and passed to another. Not only is ‘time’ generally non-negative, in this particular case we should not update a position (using moments dx, dy) if no time has passed either. This inference, and many more, can quickly be made from reading just these lines of code. Other times, such implicit inferences should be made explicit: this snippet was later repaired by adding the guard on the left.
Based on this observed symmetry between explicitly guarded code and implicit run-time assumptions about code, we propose a model that learns invariants directly from static code. As developers rarely “assert” invariants in their code, we train this model using a proxy, by automatically converting explicitly guarded code to its implicitly guarded counterpart across millions of functions. The generated programs are constrained to be similar to real functions and used to train a large model with a new loss function that is aware of logical constraints.
Our model, BODYGUARD, predicts a rich vocabulary of conditions about arbitrary code from new projects, and can be used to find & fix real missing-guard bugs, such as the one in Figure 1, with over 69% (repair) precision at 10% inspection cost. It also predicts more than two-thirds of Daikon’s invariants that could previously only be inferred with run-time data, and some entirely new ones that can be validated automatically with trace data. Our work presents a significant next step in learned static analysis, being the first to reliably produce natural invariants from arbitrary code alone. More broadly, we show that learned models can implicitly represent behavioral semantics, just from code.
2 OVERVIEW
Inferring invariants for arbitrary programs is NP-hard. Sound approaches using theorem provers are therefore constrained to restricted settings, such as simple loops (Sharma et al., 2013a), or ones with known inputs (Pham et al., 2017). Such approaches generally don’t scale: needing SMT solvers limits tools to the few program points where invariants can be proven, and ground-truth inputs typically need to be constructed by hand. An alternative is to use execution traces (Ernst et al., 2007): when realistic workloads are available (e.g. from test suites), they generally span entire systems. However, genuinely representative workloads are rare, so trace-based tools often generate poor invariants (Kim & Petersen). A key concern is that none of these have a notion of relevance, or naturalness of the actual statements (Hellendoorn et al., 2019a).
To address these gaps, we propose a learned invariant generator that predicts directly from code, trained with realistic examples. Our central claim is that the natural distribution of programs includes many groups of similar functions, some of which assert run-time assumptions explicitly, and with much detail, while others vary along these dimensions. As Figure 1 highlights, it is common for code not to state salient conditions (time > 0, on the right) that developers may naturally intuit, while other times (e.g. in a later revision, on the left), such conditions are explicitly checked. If this distributional assumption holds in general, then we can use explicit conditional checks that guard blocks in functions to teach our models about the implicit invariants of unguarded blocks in similar functions. Furthermore, we conjecture that in such comparable samples, the condition is both salient (since it is checked explicitly) and natural (since it is written by humans). Learning from such examples is thus a very appropriate training signal for inferring practically useful invariants.
Figure 2 illustrates our data generation: we find explicitly guarded blocks in functions that can be removed without substantially perverting the program, and convert these checked cases to implicit ones (Section 3.1). We garner a large aligned dataset to learn to predict the reverse of this mapping, training a Transformer-style model for code, augmented with a loss that encourages sampling logical conditions (Section 3.2). This model, nick-named BODYGUARD, works on any (Java) function, quickly adapting to the local vocabulary and semantics, and has a natural inclination to generate realistic, salient invariants that are often valid (Section 4). This result fits in a long line of observations that programming is remarkably predictable, including in its syntax (Hindle et al., 2012) and execution values (Tsimpourlas et al., 2020), likely by developers’ design, to control the complexity of the task (Casalnuovo et al., 2019). Yet none of these relate code and its execution directly, as we do through translating the former into general, intuitively meaningful statements about the latter.
3 APPROACH
Training and evaluating this approach required a substantial experimental setup: we collect three datasets for three types of evaluations and introduce an improved loss function. This section describes the data collection, evaluation, and modeling setup generally; Appendices A.1 and A.2 provide additional details on our datasets and modeling architecture, respectively. Our benchmark datasets, code, and models are available at http://omitted.link.
3.1 DATASETS
To train BODYGUARD, we generate ca. 2.5 million aligned invariant/function samples from methods with if-statements. We extract these from top-starred Java projects from Github, which we split at the organization level into training (920 projects), held-out (19 projects), and test data (61 projects). Each file was parsed to extract all its methods, from which we generate one sample for each (side-effect free) if- (or if-else-)statement by removing said guard and storing its condition. This produces an equivalent code fragment in which the statement’s condition is presumed to either be always true (if its body is kept) or false (otherwise). Correspondingly, the omitted condition (or its negation) becomes an invariant on the remaining code. The resultant sample contains the entire method (minus conditional check) as context, with the range of tokens where the invariant condition applies indicated.
We train our model to generate run-time conditions for any indicated segment of code in Java functions. We evaluate its ability to do so in two settings: 1. identifying and repairing missing explicit if-guards, collected from real bug reports, and 2. measuring the validity of our predicted invariants using trace data, collected with Daikon (Ernst et al., 2007). For the first, we collect a dataset of real missing if-condition bugs from across the history of 10K Java projects by parsing all the revisions in these projects’ histories and selecting for changes that a) introduce a single if-statement to guard previously un-guarded code, and b) are described as a bug-fixing change (see Appendix A.1.3 for details). We find ca. three thousand of these. For the second evaluation, we use Daikon to collect execution trace data from a smaller set of eight projects that we manually instrumented. We then compare our predictions to both those generated by Daikon, to measure overlap, and to the collected traces directly, to assess the validity of the invariants that we uniquely generate. This helps us understand the inference gap between static and dynamic information; i.e., is run-time data (when present) strictly more useful than code, or are the two information sources orthogonal?
3.2 MODEL SETUP
Discovering invariants is non-trivial even for experienced developers, so we both equip our models with substantial capacity and training time, and design them to prioritize precision over recall. Figure 3 shows an overview of the architecture, inputs and outputs of our model.
3.2.1 ARCHITECTURE
We base our architecture on the Transformer (Vaswani et al., 2017), amplified with the relation attention mechanism from Hellendoorn et al. (2020). While standard (lexical) language models are quite useful for code, Allamanis et al. (2018) and others have shown that utilizing syntactic & semantic information such as the AST, or control/data-flow relations, outperforms text-only models. Hellendoorn et al. (2020) propose a Transformer-based architecture that handles such relations but is faster to train and more powerful than graph neural networks (Allamanis et al., 2018). Their model relies on an added attention bias $b_{r_{ij}}$, injected into the query-key comparison of the Transformer’s conventional scaled dot-product attention: $e_{ij} = (q_i + b_{r_{ij}})\,k_j^{\top} / \sqrt{N}$. This bias is sensitive to known relations r between tokens i and j (if any, and summed together if more than one), allowing the model to selectively sharpen (or dampen) the significance of each relation. We adopt this model for our work, specifically with 512-dimensional hidden states, 64-dimensional relational embeddings, 8 attention heads, and 8 layers, totaling ca. 67M parameters.
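As a scalar sketch (ignoring multi-head projections and batching), the biased score for one token pair can be computed as below.

// Sketch of the relation-biased attention score e_ij = (q_i + b_r) . k_j / sqrt(N);
// q and k are query/key vectors, relationBias the (possibly summed) edge embedding.
class BiasedAttention {
    static double score(double[] q, double[] k, double[] relationBias) {
        double dot = 0.0;
        for (int d = 0; d < q.length; d++) {
            dot += (q[d] + relationBias[d]) * k[d];
        }
        return dot / Math.sqrt(q.length);
    }
}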
Our model uses relational information in the form of program graphs. A program graph extractor has been released for C# code (Allamanis et al., 2018), but not yet for Java, so we created our own. Specifically, we extract 5 commonly used edge types, all bi-directional, reflecting common lexical, syntactic, and semantic relations in programs (detailed in Appendix A.2.1). We use the same “leaves-only” representation as Hellendoorn et al. (2020) to limit the size of our inputs by not including non-terminal AST nodes, but instead rerouting edges that connect such nodes to a representative syntax token (e.g. from an if-statement node to its “if” token in the code). Finally, to ensure that our decoder is aware of the specified range of code tokens where the invariant applies, we also leverage the relational mechanism between the decoder and encoder, using a simple unary relation (i.e., that a token is part of the invariant’s range) between the generated tokens and input tokens.
3.2.2 DECODING LOGICAL STATEMENTS
We synthesize training data using a proxy for invariants, which necessarily introduces some bias towards characteristics of if-conditions (and the code they guard) that is incompatible with true invariants. Most notably, in code, small syntactic differences lead to drastic changes in run-time
behavior. It is common for if-else statements to have quite similar bodies, for which we generate two samples: one with the if-condition as an invariant for the if block, and one with its logical negation for the else block. This approach tends to produce very similar code fragments with very similar, but logically opposite (e.g. ‘!= null’ vs. ‘== null’) conditions.
We supervise our model to encourage its representations for syntactically close but semantically opposite statements to be distinct by introducing a contrastive hinge loss term. For every training sample, we produce the logical negation of the invariant and require the decoder to produce that negation with a much higher entropy than the original. Concretely, given a statement $inv$ composed of tokens $t_i$ and a negating function $neg$, we use the regular cross-entropy loss $L_{CE}$:
$$L_{CE}(inv) = -\sum_{i=1}^{|inv|} \log \mathrm{prob}(t_i \mid t_1 \cdots t_{i-1}, \mathit{context})$$
to compute the entropy distance w.r.t. its negation:
$$\Delta_{inv} = L_{CE}(neg(inv)) - L_{CE}(inv), \qquad L_{hinge}(inv) = \max(0,\, \varepsilon - \Delta_{inv})^2$$
in which $\varepsilon$ is the minimum desired entropy “distance” in bits. In this work, we set $\varepsilon = 2$. For this hinge-loss model, as we will call it in the rest of this paper, we train with a loss equal to $L_{seq} + L_{hinge}$ (where $L_{seq}$ is the cross-entropy sequence loss defined above).
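In code, the hinge term amounts to the small computation below, given the two cross-entropies; the max(0, ε − Δ)² form follows our reading of the loss above.

// Sketch of the contrastive hinge term: the negated invariant must cost at
// least EPSILON more bits to generate than the original.
class HingeLoss {
    static final double EPSILON = 2.0; // minimum desired entropy distance, in bits

    static double hinge(double ceOriginal, double ceNegated) {
        double delta = ceNegated - ceOriginal;          // entropy "distance"
        double shortfall = Math.max(0.0, EPSILON - delta);
        return shortfall * shortfall;                    // squared hinge penalty
    }
}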
4 ANALYSIS
We first assess our model’s precision/recall behavior on our automatically collected corpus; then, we apply it to a promising down-stream task: missing if-guard repair (and detection), which further helps us assess the models’ sensitivity to salient invariants. Finally, we use trace data to get a measure of our invariants’ validity and contrast it with an execution-based tool.
4.1 CORPUS DATA
We sample our two models’ held-out performance every 100,000 samples while training,1 leading to the learning curves shown in Figure 4a. The base model saturates earlier than the one employing a contrastive hinge loss, as the latter faces the more challenging task of distinguishing between very similar statements. However, after ca. one week of training, both models converge to approximately the same quality. It speaks to the challenge of the task that the models only reach ∼30% accuracy, due in part to the enormously diverse vocabulary of statements that occurs across our corpus, and to the inherent ambiguity of generating a single invariant when multiple valid options are available (as we will study later). This task clearly stretches our current models of code to their limits, making it a promising new task to pursue in order to improve our models.

1A full epoch is approximately 2.3M samples for the base model and twice that for the hinge-loss models
We evaluate each model at the step with their highest held-out accuracy on the test data, where we compare the top generated invariant (from beam search, size = 25) to the ground truth. Figure 4b shows the precision/recall behavior of the two models in the high precision range, which is generally much more useful to developers than high recall. We rank predictions by their entropy: an invariant that is highly likely to be sampled from its context is likely correct. Both models respond strongly to this entropy threshold, becoming especially far more precise when entropy values drop below 1.0 (around 40% recall), and converging to (near) perfect precision, at a commensurate expense of recall. Both break 80% precision at nearly 20% recall, which still accounts for tens of thousands of program points across our test projects alone. Going forward, we use the hinge loss model, which has the better precision-recall trade-off, and prioritize precision over recall.
4.2 MISSING IF DETECTION
Using the ∼3K real missing if-guard bugs collected from project histories (see Section 3.1), we first measure our model’s accuracy and precision at predicting this guard from the localized bug in the top row of Table 1. This is most directly related to its training signal, where we provided our model with the location of the code guarded by the targeted invariant. Our model achieves a similar overall accuracy here (ca. 29.3%) as on our general test data,2 and precision at 10% recall is also quite high (69.1%), allowing us to fix 215 out of 311 bugs at that level once located. That these tasks appear to be comparably “hard” is relevant; automatically synthesized training data is often overly easy compared to real tasks, which harms generalization (Hellendoorn et al., 2019b).
We also care about our model’s sensitivity to salience: the missing condition in these samples is (arguably) the most important invariant in the entire method, not just the indicated code block. Our model should be able to detect this given how it was trained. This contrasts with tools like Daikon (Ernst et al., 2007), which emit all logically valid invariants, many of which are irrelevant (Hellendoorn et al., 2019a). The next three rows of Table 1 show the results of running our invariant generator on every contiguous segment (up to 5 blocks) of code in each buggy method, ranking the top invariants across segments for inspection. This is substantially harder than the previous task, reducing the overall accuracy threefold and roughly halving precision. Nevertheless, that is still much better than might be expected if BODYGUARD had no location-sensitivity: we test over 30 blocks per method on average. We also show that the top prediction often matches some aspect of the correct answer, especially the position, often placing the correct invariant at another (nearby) block of code.
Finally, we note that the other (low entropy) invariants predicted here are often not at all “incorrect”; from cursory inspection, many are valid, meaningful statements. We study their validity next.
4.3 VALIDITY AND OVERLAP WITH DAIKON
Learning invariants just from code stands in sharp contrast to most current approaches in this field, prominently including Daikon (Ernst et al., 2007), which learns invariants from execution trace data instead. Collecting trace data requires instrumenting projects and access to diverse, representative workloads. This makes it much harder to apply to arbitrary code than our approach but has the benefit of offering stronger guarantees of correctness. Comparing our model with Daikon in projects where this information is available thus allows for two useful evaluations. First, we can lower-bound our tool’s true-positive rate by determining how often it replicates Daikon’s own invariants, which we tentatively deem “safe” because they hold on all observed traces and have passed a significance test.3 Second, we can use this trace data directly to determine the validity of (a subset of, see Appendix A.3.2) our invariants that do not overlap with Daikon’s.

2The base model (trained without hinge loss) reached 26.8% accuracy.
Figure 5a shows the first result: the frequency with which our invariants overlap with Daikon’s, again plotted against recall, where the points correspond to entropy thresholds ranging from 1e-4 to 10. Evidently, pre-conditions are easier to predict for our model, likely because it has no real notion of post-conditions (see Appendix A.3.2). Even so, our tool can retrieve more than two-thirds of Daikon’s invariants at a respectable 10% recall from static code alone, which is quite promising.
We generate 10 invariants per program point using beam search, so even at a low entropy threshold we produce many pre- and post-conditions that Daikon does not (those either out of its vocabulary, or with too few observations). It is reasonable to expect many of these to be valid given previous results. Since Daikon does not provide a means of validating a plain-text invariant, we wrote a simple logical engine that parses Daikon’s trace data files and compares a number of categories of our invariants against the recorded values, such as array length, string equivalence, instanceof checks, etc. Using this approach, we are able to validate ca. 40% (12K) of our emitted invariants, resulting in the validities summarized in Figure 5b. In short, our invariants at full recall are valid ca. 60% of the time, and this validity ratio greatly increases as we sharpen the entropy threshold, to over 80%, at recall values under 10%.
Many of these validated invariants were not produced by Daikon, implying that static and dynamic data are orthogonal for this task. We collected the 708 pre-conditions that BODYGUARD generates at an entropy of ≤0.1; of these, 540 could be checked automatically with trace data, yielding 449 valid and 91 invalid cases. We manually inspected the 168 remaining cases and found that most (122) were valid, but Daikon’s tracer simply did not record the information needed to predict these.4 Overall, this suggests that more than 80% of our invariants at this recall level (3.5%) are correct, and more than two-thirds of the invalid remainder could be ruled out using trace data, if available, leaving a false positive rate of just 6.5% (46/708) when execution data is available (while also adding about 200 valid invariants to Daikon’s own predictions). This supports our belief that our tool is largely orthogonal to, and usefully synergistic with, dynamic, trace-based invariant generators.
3Though in practice it still generates a fair number of spurious statements.
4Some of these were correct statements but not proper pre-conditions, e.g. invariants about a variable declared at the first line of the function. This is an artifact of our training setup, which has no explicit notion of method-level pre-conditions. We marked these as invalid for this analysis.
5 RELATED WORK
Automatically inferring invariants is usually approached either in constrained settings where some “checker” (e.g. an SMT solver) or ground-truth is available, or under the assumption that we have access to execution traces from realistic workloads. Among the first, Sharma et al. (2013b) find algebraic (polynomial) invariants by solving a system of linear equations with an SMT solver and using counterexamples to create new test inputs. Sharma et al. (2013a) use PAC-learning to learn integer loop invariants on programs with a single loop, trained by contrasting passing and failing test cases. Padhi et al. (2016) learn pre-conditions and loop invariants as boolean combinations of arithmetic conditions (“features"), which they synthesize by generating and testing all features up to a size cutoff. This approach is agnostic to the program structure, as is Pham et al. (2017), who use a fixed set of feature templates over state vectors to learn linear inequalities that classify passing and failing state vectors, requiring both post-conditions and passing and failing tests to be in place. In contrast, our work makes no assumptions about the code other than the availability of a parser. In settings where an SMT solver (or test cases) is available, it could be used to filter invalid invariants generated by BODYGUARD.
Among machine learning based approaches, Si et al. (2018) use policy-learning to teach a GNN to generate loop invariants in cooperation with an SMT solver (Z3), which provides intermediate rewards (through counterexamples) to finesse the sparsity of the eventual reward (the final validity of the invariant). A second reward is added to reject “meaningless" and “trivial" predicates such as e == e or e < e. Besides not requiring an SMT solver, our approach learns notions like “relevant” and “natural” directly from real code. Relatedly, Brockschmidt et al. (2017) also use GNNs to induce invariants over data structures, using a similar approach of generating invariants (in separation logic) supervised by data produced from test runs. The production is based on hand-engineered features over the data-structure graphs. Both these approaches may be symbiotic with ours where tests or logical constraints are known, although they consider different classes of invariants.
Daikon (Ernst et al., 2007) belongs to the second class of invariant predictors, leveraging execution traces from realistic inputs to infer a large vocabulary of method pre- and post-conditions. This general applicability has led to its frequent use as a basis for other tools, often to generate an initial corpus of invariants for tasks such as automated patching (Perkins et al., 2009) and test generation (Artzi et al., 2006; Pacheco & Ernst, 2005). However, truly representative inputs are rare, and using incomplete data risks generating many irrelevant or invalid invariants. Polikarpova et al. (2009) found that the size of the test suite affects the validity of generated invariants on Eiffel programs. Kim & Petersen anecdotally note various issues with Daikon’s invariants on large, C++ systems, such as a high degree of false positives and few insightful invariants. Hellendoorn et al. (2019a) similarly observe (on hand-annotated C# functions) few relevant and valid invariants based on executions from unit tests. Our approach learns directly from natural conditions to generate relevant and generalizable conditions, and when trace data is present, it can be used to filter out invalid invariants.
6 CONCLUSION
We conjectured that typically used invariants are in a sense natural, like many other aspects of programs (Hindle et al., 2012; Barr et al., 2013; Tsimpourlas et al., 2020), and therefore predictable, intentionally written in standardized ways for ease of reading and writing (Casalnuovo et al., 2019). Our results support this claim: both explicit (if-statements) and implicit (invariants) conditions pertaining to code can be predicted precisely, and with high validity from code reading alone, facilitated by our proposed data generation approach and loss function. As a result, we can generate many invariants that were previously only accessible through trace data (and more), which greatly increases the reach and applicability of invariant inference.
This finding has broad implications: our tool can provide valuable semantic insights both to developers, e.g. to aid debugging efforts or facilitate code understanding, and to other tools, many of which struggle to navigate an exponentially large search space of programs. Our tool can help bias this search space using highly likely assertions, which could greatly improve the range and quality of solutions found by downstream applications. In summary, our novel approach learns to reason about program state by synthesizing training data from if-conditions; this empowers BODYGUARD to reliably generate useful invariants entirely from static code.
A APPENDIX
A.1 DATA COLLECTION DETAILS
We base our evaluation on a Java dataset consisting of the top 10,000 most-starred Java projects on Github, collected March 30th, 2020 using the Github v3 API. Since generating our training data samples is quite expensive, we used just the top 1,000 (most starred) of these projects to automatically generate training and evaluation samples for the results described in Section 4.1. This dataset was split between training, held-out and evaluation sets at the organization level to ensure minimal duplication, as projects within the same organization often share many coding patterns (Allamanis, 2019). We allocated 95% of organizations (920 projects) to training data, 2% to held-out data (19 projects), and 3% to test data (61 projects), to assess the final trained models.
A.1.1 INVARIANT GENERATION
We parse each file using Eclipse’s JDT parser and extract all (non-nested) methods from the resulting parse tree. Within each method, we detect all if-statements, removing all those whose conditions contain side-effects (such as assignments, increment/decrement operators, and non-whitelisted methods, see Appendix A.1.2), and those whose body contains a control-flow altering statement (e.g. return, throw) unless it is the sole statement.5 For the remainder, we generate samples based on the following types of if-statements:
Simple if-statements: these include samples like Figure 1, in which a single if-statement guards a simple body with no control-flow altering code.
If-else statements: for these we generate two samples: one in which we remove the else block entirely and generate an if-invariant as above, and one in which we negate the condition and generate an invariant for just the else block. Note that else if statements in Java are treated as nested statements and thus handled the same way.
Control-flow altering if-statements: any if-statement whose body prevents the execution of subsequent code, by containing just a return, break, continue, or throw (Exception) statement, is treated as declaring an invariant (namely, the negation of the if-condition) for the ensuing code. An illustrative transformation is sketched below.
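As an illustration on hypothetical code, a simple if-statement yields one sample in which the guard is removed and its condition is stored as the invariant over the formerly guarded range.

class Mover {
    int position, velocity;

    // Original method, with an explicit guard:
    void step(int time) {
        if (time > 0) { position += velocity * time; }
    }

    // Derived training sample: guard removed; "time > 0" is recorded as the
    // invariant over the formerly guarded statement:
    void stepSample(int time) {
        position += velocity * time; // invariant: time > 0
    }
}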
In all cases, the surrounding context is the entire method, and the range of tokens to which the condition applies (namely, those that used to be guarded) is stored with the sample. We generate samples for all these conditions, producing a new sample for every if-statement. This ensures that each sample minimally alters the original code, which reduces the risk that we produce unnatural code (which would harm the generalization of our model). As such, a method can produce many samples, so functions with many conditions will be represented proportionally more often. We do not consider that problematic, as 1. long functions tend to have correspondingly more invariants, so the increased emphasis should be beneficial to our model, and 2. due to memory constraints, we in any case cap our training samples to modestly large functions (up to 500 sub-tokens, which typically translates to the order of 20 lines).
A.1.2 PRODUCING NATURAL FUNCTIONS
Not all if-guards can be removed without changing the semantics of the code; conditions can have side-effects. This includes assignments (e.g. if ((x = y) != null)), certain operators (viz. ++ and --) and method calls with side-effects. To ensure that the converted code is semantically coherent, and because invariants should not have side effects anyway, we omit all such cases. Many method calls do not have side effects, so to avoid limiting our dataset too much, we heuristically select a large, but relatively “safe” set of these based on common coding patterns. This includes common “getter” methods, java.lang.Math calls, object equality tests, collection inspection methods, such as inclusion checks (e.g. ‘contains’, ‘has’) and size-related methods, and a few miscellaneous others that were common in our training data (e.g. parseInt, name). The regexes used to detect these various types of methods are listed in Table 2.
Removing if-statements does not always yield meaningful code; consider:

int foo(int x, int y) {
    if (x > y) { return x; }
    return y;
}
5When an if-statement body terminates the current branch of execution only after first executing some other code, generating equivalent unguarded code is complicated: inlining the guarded code (minus the final statement) would often produce very unnatural code, as it tends to involve some form of error-recovery, such as logging or resetting a value. Omitting the entire block instead, as we do for simple control-flow altering statements, may be more appropriate; future work can explore this, and various other corner cases, to generate more samples.
If we remove the conditional check, the resulting method is left with just two consecutive return statements, which is invalid in Java. This particular case would trigger a compiler error, but not all inappropriate removals do: if the if-body had instead assigned y = x + 1;, removing it would result in y always being assigned x + 1 before returning, making the parameter useless. Not using a parameter is not erroneous by definition, since the method foo may be inherited (or overridden in a subclass) and other instantiations do make use of it, so Eclipse’s parser just emits a warning. Since both these cases result in code that is unrepresentative of typical Java and would yield highly predictable invariants, we additionally reparse each resulting function after removal of the targeted if-statement and discard any changes that trigger compilation warnings and errors.
Specifically, Eclipse JDT requires full type resolution to guarantee correct program analysis and stops checking for violations if it finds compile-time errors from missing types. When processing as many projects as we do (many of which cannot be built automatically), we cannot soundly resolve all dependencies for each project. As a close approximation, we instead parse each function in its entire project context to allow as much heuristic type resolution as possible. Then, we look for any increase in warnings and errors between the method before and after removing an if-statement. This reduces the number of collected samples and increases the time to generate the dataset (to ca. 200 CPU hours for 1K projects), but also increases its validity by eliminating many inappropriate fragments.
Finally, we limit our functions to those having 500 (sub-)tokens or less to facilitate a reasonable modeling throughput. This does not reduce the dataset by much; most functions tend to fit this limit. In total, we collect ca. 2.34M training samples, 12.1K held-out samples and 101K test samples, with approximately 200 sub-tokens per function on average.
A.1.3 COLLECTING “MISSING IF” BUGS
We collect our dataset of missing if-condition bugs from across the history of all the aforementioned 10K projects in our dataset. For each project, we parsed every commit to the main branch, using git’s “diff” function to identify cases in which the sole addition was to wrap one or more existing statements in an if-statement. This yielded 32,471 samples from across 8,174,552 commits. Although all of these may constitute interesting samples, we prioritize bug-detection for now as the most direct application of our model. To ensure that our collected samples are likely bug-related, we focus only on the ca. 3.7K cases in which the entire commit introduced just a single if-statement in a single Java file and the corresponding commit message contained any of the common bug-related terms such as “fix”, “bug”, and “fault” (Ray et al., 2016). We additionally filtered out any commits to projects that were included in our training dataset to avoid the risk of overlap (which need not be present as many commits reflect now out-dated code), yielding 3,146 samples in total.
[Table 3 — per-project counts; columns: Project, Methods, Invariants]
A.1.4 RUNNING DAIKON
Comparing our tool to Daikon (Ernst et al., 2007) required some adaptations. Daikon requires projects that are fully built, instrumentable, and have representative workloads. Unit tests are often insufficient because they test for both appropriate and inappropriate values (e.g. those triggering an exception), which is counter to our purpose.6 Scaling Daikon to our aforementioned dataset is not feasible; indeed, to the best of our knowledge there is no large public dataset of Daikon invariants on real programs. Instead, we created a modestly large dataset of our own.
To do so, we leveraged the Dacapo benchmark (Blackburn et al., 2006). Originally created to benchmark program optimizations (e.g. through better compilers), each project in this benchmark comes with a set of representative workloads designed to execute many of its paths. This is ideal for our case. Practically, although the benchmark comes with a single runner for each project, Daikon could not instrument through the reflective calls that this framework uses. Instead, we manually instrumented and ran 8 projects (details in Table 3) in this suite directly, which, in nearly all cases, involved writing our own “runner” to mimic Dacapo’s instrumentation while calling the requisite project-code directly. We then applied Daikon as usual, running the code under instrumentation first and then producing invariants from the resulting traces. Table 3 summarizes the resulting invariant counts.
We limited the volume of the collected trace data by exponentially decreasing the number of traces for each program point once it was seen sufficiently often (10 times) and excluding many values from tracing, such as those that are not visible from the program point of interest and any nested values with more than three levels. Even then, Daikon required upwards of 30GB of RAM and nearly an hour of processing for the larger projects – much more than our models.
A.2 MODELING DETAILS
A.2.1 PROGRAM GRAPH EXTRACTION FOR JAVA
We used Eclipse’s JDT parser with approximate name-binding resolution to extract five edge types across 3 broad categories of information that are accessible in source code:
• Lexical: every token is connected to its neighbors through next-token edges (and their reverse). This adds additional sensitivity to lexically local information beyond the positional encoding used in the standard Transformer.
• Syntactic: we extract all AST parent-child relations, which provide insight into the hierarchical structure of source code.
• Data-flow: we include three types of data-flow edges: next-use edges, which connect lexically sequential uses of the same variable; computed-from edges, which connect any variable usage to the last value it was assigned, and def-use edges, which connect every variable usage to its (single) original declaration point.
In addition, every edge type has a symmetric, mirrored version (e.g. prev-token), yielding a total of 10 distinct edge kinds used by our model.
6In addition, Daikon cannot instrument JUnit-tested code since it uses reflection, which effectively makes Java tests off-limits.
A.2.2 TRAINING DETAILS
Consistent with recent observations regarding effective modeling of source code vocabulary (Hellendoorn & Devanbu, 2017; Karampatsis et al., 2020), we use Byte-Pair Encoding to create a sub-token vocabulary based on the tokens in our training data. Our vocabulary, estimated from the training data, spans 10,000 sub-tokens; both the input function and the predicted invariant are sub-tokenized using this (reversible) dictionary. Transformer models generally scale in memory needs with the square of the size of their inputs. To ensure that our minibatches are sufficiently large to keep the gradients stable, we restrict our inputs to functions with up to 500 (BPE) tokens and our invariants to 50 tokens (although invariants that long are very rare). With these cut-offs, we train batches of up to 12,500 tokens in parallel across two NVidia RTX Titan GPUs with 24GB of VRAM each. By packing similarly sized functions per batch, we minimize the overhead from padding and are able to fit ca. 70 functions per batch on average.
A.3 EVALUATION DETAILS
A.3.1 IF-CONDITION LOCALIZATION & REPAIR METRICS
Since some methods have far more program blocks than others, simply ranking all invariants across method boundaries by entropy would lead to bigger methods being disproportionately represented. Rather, we try to balance method- and invariant-level inspection cost by simulating the inspection of 10% of invariants in our dataset from a subset of methods. We do so by first ranking methods by the entropy of their top invariant, from low to high, and then inspecting all invariants from these methods in order until we have inspected 10% of all location/invariant pairs in this dataset (which number 73,738). The 10% inspection (recall) level in Table 1 corresponds to a threshold of just 0.0233 bits, under which the average method has 55.3 blocks – substantially more than the average method overall. Separating out the functions with 32 or fewer program points (the mean), the overall accuracy increases to 16.3% and the 10% recall precision increases to 50.0% – the joint task is naturally easier on shorter methods.
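The inspection simulation can be sketched as below; the Method record and its fields are stand-ins we introduce for illustration.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: rank methods by the entropy of their best invariant, then inspect
// whole methods until 10% of all location/invariant pairs are covered.
class InspectionSimulation {
    record Method(double topInvariantEntropy, int invariantCount) {}

    static List<Method> selectForInspection(List<Method> methods, int totalPairs) {
        List<Method> ranked = new ArrayList<>(methods);
        ranked.sort(Comparator.comparingDouble(Method::topInvariantEntropy));
        List<Method> inspected = new ArrayList<>();
        int budget = totalPairs / 10; // inspect 10% of all pairs
        int used = 0;
        for (Method m : ranked) {
            if (used >= budget) break;
            inspected.add(m);
            used += m.invariantCount(); // inspecting a method costs all its invariants
        }
        return inspected;
    }
}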
A.3.2 GENERATING PRE- AND POST-CONDITIONS WITH BODYGUARD
The comparison with Daikon invariants comes with an important caveat: Daikon only generates method pre- and post-conditions. This means that we cannot perfectly classify the validity of all our invariants. Nevertheless, our experiments on missing conditions show that our models are precise at inferring even very specific missing conditions, which strongly suggests (as has our manual analysis) that many of its other suggestions are valid as well.
Secondly, our tool produces invariants for any syntactic block of code throughout the method and does not have a general mechanism to indicate that pre- or post-conditions are required. The closest approximation with our tool is to mark the entire method body as needing an invariant when a pre-condition is required, and the final (return) statement otherwise. To avoid the complexity of having to match multiple return points, or none at all for void methods, we restrict the latter case to methods with a single return statement only. Note that the latter is an imperfect approximation: our tool only learns to predict guards that precede a statement. A guard that it predicts for a return statement may not be an appropriate substitute for a true post-condition but rather a reason to return at that particular point.
A.3.3 MEASURING OVERLAP WITH DAIKON’S INVARIANTS
We quantify the overlap between our predicted invariants and Daikon’s using normalized Cumulative Gain. This metric captures the quality of a ranker in terms of how often it returns relevant elements; it is traditionally used in information retrieval, for example to evaluate a web search engine. Although discounted cumulative gain is more commonly used, we refrain from penalizing based on the “rank” of predictions, because there is no reason to assume that Daikon’s invariants are more salient or relevant than others that we predict. That is, all that matters is that Daikon’s invariants are among our (top 10) predictions.
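A sketch of this non-discounted metric follows; exact string matching is a simplification we make here, as matching invariants in practice presumably requires some normalization of the invariant text.

import java.util.List;
import java.util.Set;

// Sketch of (non-discounted) normalized Cumulative Gain: the fraction of
// reference (Daikon) invariants that appear anywhere in our top-k predictions.
class CumulativeGain {
    static double normalizedCG(Set<String> daikonInvariants, List<String> topPredictions) {
        if (daikonInvariants.isEmpty()) return 1.0;
        long found = daikonInvariants.stream()
                .filter(topPredictions::contains)
                .count();
        return (double) found / daikonInvariants.size();
    }
}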
A.4 FURTHER RESULTS
A.4.1 CHARACTERISTICS OF MANUALLY INSPECTED INVARIANTS
A large portion of the manually verified invariants in Section 4.3 corresponded to fairly trivial statements, such as instanceof assertions for a value being cast to the corresponding type. In some cases, our invariants were more general or accurate than Daikon’s; e.g. when BODYGUARD asserts that an object is not null whereas Daikon asserts that a member of that object is not null. At other times, we inferred invariants that Daikon missed entirely, likely due to limitations in its internal rules and heuristics. For instance, as a pre-condition of:
static ReliableFile getReliableFile(File file) throws IOException {
    if (file.isDirectory()) {
        throw new FileNotFoundException("");
    }
    return new ReliableFile(file);
}
BODYGUARD correctly inferred that !file.isDirectory(), while Daikon only offered file != null.
In another case, our tool produced a more specific invariant for this PMD snippet:
public int getPriority() { return priority; }
Here, Daikon asserts that the priority level is exactly either 2 or 3, because those are the only observed values in the (evidently unrepresentative) traces of this method. This indicates how Daikon’s invariants can be inaccurate even with available workloads. BODYGUARD more broadly anticipates that priority >= 0, which matches the method’s actual specification as encoded in its Javadoc documentation (which our tool does not use).
A.4.2 FURTHER EXAMPLES
In the below example,7 a badge variable, initialized to null, is first assigned a value based on program state, and then added to two collections (local and, conditionally, global). This second segment, after the switch statement, should have been guarded by a check that badge != null, since not every case assigns it a value. Across all 53 permutations of code blocks (and countless options per block) in this method, BODYGUARD predicts this condition at the correct location at rank 3. Its first prediction was the nonsensical statement !global as a guard for the entire method body. Possibly, no good prediction was possible for that range, so this option had low entropy by sheer contrast with other possibilities. The second ranked prediction was badge == null for every line after the declaration of badge. While this is tautologically valid as a pre-condition for those lines, it highlights the importance of specificity in range – it is only truly invariant for some of these lines, specifically, the start of each case and the break statement of the latter two, a range that is not currently supported by our approach.
public static void validateTutorial() {
    Badge badge = null;
    switch (Dungeon.hero.heroClass) {
        case WARRIOR: badge = Badge.TUTORIAL_WARRIOR; break;
        case MAGE: badge = Badge.TUTORIAL_MAGE; break;
        case ROGUE: break;
        case HUNTRESS: break;
    }
    local.add(badge);
    if (!global.contains(badge)) {
        global.add(badge);
        saveNeeded = true;
    }
}
7Repaired in https://github.com/00-Evan/shattered-pixel-dungeon/commit/475d78cd0599a1d39c4708a91fbb30c95b3f3418
The following snippet8 returns a default image, generating it on the first call. Even though the documentation of createBitmap(int, int, Bitmap.Config)9 does not specify it, this method can return null in rare circumstances, such as when a phone runs out of memory and recovers by aborting this call.10 BODYGUARD correctly infers empty != null as the top invariant, having seen similar calls in other Android projects in its training data. Specifically, it predicts this invariant both for just the line containing empty.eraseColor (rank 1) and for the block including that and the next line (rank 2). The latter is the more correct segment.
private static Bitmap getDefaultThumbnail() {
    if (defaultImage == null) {
        Bitmap empty = Bitmap.createBitmap(160, 200, Bitmap.Config.ARGB_8888);
        empty.eraseColor(Color.WHITE);
        defaultImage = paint(empty);
    }
    return defaultImage;
}
A.5 LIMITATIONS
We evaluated our predictions broadly to assess both their salience and validity. Even so, it is hard to automatically assess all of our invariants, especially those inserted in the middle of methods and those whose vocabulary is outside of what Daikon finds. However, the results on the task of predicting missing if-statements (which avoids these evaluation problems) are quite encouraging; we believe that this bodes well for the more general settings. Future work may better assess the validity of our entire vocabulary of invariants, perhaps by injecting asserts corresponding to our predictions into the source code and executing the tests.
Our second main criterion is salience: our predictions should be particularly relevant to the referenced code, in contrast to prior work. We chose to assess this by using real missing if-guards, which would appear to be a good example of particularly salient implicit conditions (as developers chose to make them explicit). We did not quantitatively study other types of salience, such as which conditions are most informative or intuitively obvious to real developers. This, too, may be a fruitful area for future work; human subject studies involving invariants have produced worthwhile insights into developer behavior in the past (Staats et al., 2012).
8Repaired in https://github.com/SufficientlySecure/document-viewer/commit/ 680650556340aa15502e1ec375e4255c1c16fb5b
9https://developer.android.com/reference/android/graphics/Bitmap# createBitmap(int,int,android.graphics.Bitmap.Config)
10As suggested at https://stackoverflow.com/a/14778533. | 1. What is the novel approach proposed by the paper for training a Transformer model?
2. What are the strengths of the paper regarding its contribution to program invariant generation?
3. What are the weaknesses of the paper, particularly in terms of its effectiveness and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
Summary: This paper proposes a novel approach for training a Transformer model to predict program invariants. The model is trained using training data synthesized from explicit conditional checks in functions and is used to predict invariants of unguarded blocks in similar functions.
Strength
The paper addresses the important and challenging problem of program invariant generation from static code in a scalable way.
Real-world “missing if-guard” bugs are detected using the proposed model.
Weakness
The idea of synthesizing training data by automatically converting explicitly guarded code to its implicitly guarded counterpart is interesting. However, the effectiveness of the trained model at inferring program invariants in a general way is not clear from the experimental results. The evaluation with real-world bugs focuses on “missing if-guard” bugs. The difficulty of detecting this class of bug cannot be judged, as there are no accuracy results from an existing tool (for example, Daikon) in detecting these real-world bugs.
Although a comparative analysis with Daikon is presented, the proposed approach focuses on a narrower class of invariants than Daikon. Moreover, Daikon relies on execution traces. A comparison with an existing ML-based approach using static code (e.g., [1]) would provide more interesting insights about the model’s accuracy.
A contrastive hinge loss is introduced to address the “syntactically close” but “semantically opposite” cases. However, from Figure 4, it seems the model’s performance is not significantly impacted by the loss function.
[1] P. Garg, D. Neider, P. Madhusudan, and D. Roth. Learning Invariants using Decision Trees and Implication Counterexamples.
Question to author: Please address and clarify the cons above. |
ICLR | Title
Learning to Infer Run-Time Invariants from Source code
Abstract
Source code is notably different from natural language in that it is meant to be executed. Experienced developers infer complex “invariants” about run-time state while reading code, which helps them to constrain and predict program behavior. Knowing these invariants can be helpful; yet developers rarely encode these explicitly, so machine-learning methods don’t have much aligned data to learn from. We propose an approach that adapts cues within existing if-statements regarding explicit run-time expectations to generate aligned datasets of code and implicit invariants. We also propose a contrastive loss to inhibit generation of illogical invariants. Our model learns to infer a wide vocabulary of invariants for arbitrary code, which can be used to detect and repair real bugs. This is entirely complementary to established approaches, which either use logical engines that scale poorly, or run-time traces that are expensive to obtain; when present, that data can complement our tool, as we demonstrate in conjunction with Daikon, an existing tool. Our results show that neural models can derive useful representations of run-time behavior directly from source code.
1 INTRODUCTION
Software maintenance requires reading a lot of code. Experienced developers are adept at this, garnering rich semantics just from this “static” (viz, without running the code) inspection to find complex bugs, predict a function’s outputs from its inputs, and learn new coding patterns. They strongly rely on generic assumptions about the program’s run-time behavior; e.g., that a list index never escapes the list bounds and strictly increases. Such “invariants” capture general, yet relevant constraints on the program’s expected run-time behavior.
Automatically inferring invariants can help both developers and tools: first, they can be used to detect bugs where explicit assumptions are incorrect or implicit ones ought to be explicit; second, invariants can guide myriad other tools, such as test-case generators (Artzi et al., 2006). However, inferring invariants is not tractable in general and sound approximations don’t scale beyond very small programs. Instead, popular tools either use dynamic trace data from real executions (esp. Daikon (Ernst et al., 2007)), which requires costly instrumentation, or focuses on highly constrained cases such as loops (Sharma et al., 2013a; Padhi et al., 2016).
Yet this scalability obstacle may be largely artificial. Practical programs rarely take on an exponential range of values (e.g., integers tend to come in a bounded range), and developers seem able to make such inferences without undertaking a project-scale analysis. Rather, they reliably extract them from a local context, using their past experience and cues from the code itself. Consider the snippet in Figure 1: the program on the right uses a time variable, returned from one method and passed to another. Not only is ‘time’ generally non-negative, in this particular case we should not update a position (using moments dx, dy) if no time has passed either. This inference, and many more, can quickly be made from reading just these lines of code. Other times, such implicit inferences should be made explicit: this snippet was later repaired by adding the guard on the left.
Based on this observed symmetry between explicitly guarded code and implicit run-time assumptions about code, we propose a model that learns invariants directly from static code. As developers rarely “assert” invariants in their code, we train this model using a proxy, by automatically converting explicitly guarded code to its implicitly guarded counterpart across millions of functions. The generated programs are constrained to be similar to real functions and used to train a large model with a new loss function that is aware of logical constraints.
Our model, BODYGUARD, predicts a rich vocabulary of conditions about arbitrary code from new projects, and can be used to find & fix real missing-guard bugs, such as the one in Figure 1, with over 69% (repair) precision at 10% inspection cost. It also predicts more than two-thirds of Daikon’s invariants that could previously only be inferred with run-time data, and some entirely new ones that can be validated automatically with trace data. Our work presents a significant next step in learned static analysis, being the first to reliably produce natural invariants from arbitrary code alone. More broadly, we show that learned models can implicitly represent behavioral semantics, just from code.
2 OVERVIEW
Inferring invariants for arbitrary programs is NP-hard. Sound approaches using theorem provers are therefore constrained to restricted settings, such as simple loops (Sharma et al., 2013a), or ones with known inputs (Pham et al., 2017). Such approaches generally don’t scale: needing SMT solvers limits tools to the few program points where invariants can be proven, and ground-truth inputs typically need to be constructed by hand. An alternative is to use execution traces (Ernst et al., 2007): when realistic workloads are available (e.g. from test suites), they generally span entire systems. However, genuinely representative workloads are rare, so trace-based tools often generate poor invariants (Kim & Petersen). A key concern is that none of these have a notion of relevance, or naturalness, of the actual statements (Hellendoorn et al., 2019a).
To address these gaps, we propose a learned invariant generator that predicts directly from code, trained with realistic examples. Our central claim is that the natural distribution of programs includes many groups of similar functions, some of which assert run-time assumptions explicitly, and with much detail, while others vary along these dimensions. As Figure 1 highlights, it is common for code not to state salient conditions (time > 0, on the right) that developers may naturally intuit, while other times (e.g. in a later revision, on the left), such conditions are explicitly checked. If this distributional assumption holds in general, then we can use explicit conditional checks that guard blocks in functions to teach our models about the implicit invariants of unguarded blocks in similar functions. Furthermore, we conjecture that in such comparable samples, the condition is both salient (since it is checked explicitly) and natural (since it is written by humans). Learning from such examples is thus a very appropriate training signal for inferring practically useful invariants.
Figure 2 illustrates our data generation: we find explicitly guarded blocks in functions that can be removed without substantially perverting the program, and convert these checked cases to implicit ones (Section 3.1). We garner a large aligned dataset to learn to predict the reverse of this mapping, training a Transformer-style model for code, augmented with a loss that encourages sampling logical conditions (Section 3.2). This model, nick-named BODYGUARD, works on any (Java) function, quickly adapting to the local vocabulary and semantics, and has a natural inclination to generate realistic, salient invariants that are often valid (Section 4). This result fits in a long line of observations that programming is remarkably predictable, including in its syntax (Hindle et al., 2012) and execution values (Tsimpourlas et al., 2020), likely by developers’ design, to control the complexity of the task (Casalnuovo et al., 2019). Yet none of these relate code and its execution directly, as we do through translating the former into general, intuitively meaningful statements about the latter.
3 APPROACH
Training and evaluating this approach required a substantial experimental setup: we collect three datasets for three types of evaluations and introduce an improved loss function. This section describes the data collection, evaluation, and modeling setup generally; Appendices A.1 and A.2 provide additional details on our datasets and modeling architecture, respectively. Our benchmark datasets, code, and models are available at http://omitted.link.
3.1 DATASETS
To train BODYGUARD, we generate ca. 2.5 million aligned invariant/function samples from methods with if-statements. We extract these from top-starred Java projects from Github, which we split at the organization level into training (920 projects), held-out (19 projects), and test data (61 projects). Each file was parsed to extract all its methods, from which we generate one sample for each (side-effect-free) if- (or if-else-)statement by removing said guard and storing its condition. This produces an equivalent code fragment in which the statement’s condition is presumed to either be always true (if its body is kept) or false (otherwise). Correspondingly, the omitted condition (or its negation) becomes an invariant on the remaining code. The resultant sample contains the entire method (minus conditional check) as context, with the range of tokens where the invariant condition applies indicated.
We train our model to generate run-time conditions for any indicated segment of code in Java functions. We evaluate its ability to do so in two settings: 1. identifying and repairing missing explicit if-guards, collected from real bug reports, and 2. measuring the validity of our predicted invariants using trace data, collected with Daikon (Ernst et al., 2007). For the first, we collect a dataset of real missing if-condition bugs from across the history of 10K Java projects by parsing all the revisions in these projects’ histories and selecting for changes that a) introduce a single if-statement to guard previously un-guarded code, and b) are described as a bug-fixing change (see Appendix A.1.3 for details). We find ca. three thousand of these. For the second evaluation, we use Daikon to collect execution trace data from a smaller set of eight projects that we manually instrumented. We then compare our predictions to both those generated by Daikon, to measure overlap, and to the collected traces directly, to assess the validity of the invariants that we uniquely generate. This helps us understand the inference gap between static and dynamic information; i.e., is run-time data (when present) strictly more useful than code, or are the two information sources orthogonal?
3.2 MODEL SETUP
Discovering invariants is non-trivial even for experienced developers, so we both equip our models with substantial capacity and training time, and design to prioritize precision over recall. Figure 3 shows an overview of the architecture, inputs and outputs of our model.
3.2.1 ARCHITECTURE
We base our architecture on the Transformer (Vaswani et al., 2017), amplified with the relation attention mechanism from Hellendoorn et al. (2020). While standard (lexical) language models are quite useful for code, Allamanis et al. (2018) and others have shown that utilizing syntactic & semantic information such as the AST, or control/data-flow relations, outperforms text-only models. Hellendoorn et al. (2020) propose a Transformer-based architecture that handles such relations but is faster to train and more powerful than graph neural networks (Allamanis et al., 2018). Their model relies on an added attention bias b_rij, injected into the query-key comparison of the Transformer’s conventional scaled dot-product attention: e_ij = (q_i + b_rij) k_j^T / √N. This bias is sensitive to known relations r between tokens i and j (if any, and summed together if more than one), allowing the model to selectively sharpen (or dampen) the significance of each relation. We adopt this model for our work, specifically with 512-dimensional hidden states, 64-dimensional relational embeddings, 8 attention heads, and 8 layers, totaling ca. 67M parameters.
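The biased attention score can be sketched with plain arrays as below; this is a single-head, unbatched illustration of the formula above (taking N as the key dimension), not the actual implementation.

// Sketch of one attention head with an additive relational bias:
// e_ij = (q_i + b_rij) · k_j / sqrt(N).
class RelationalAttention {
    static double[][] scores(double[][] q, double[][] k, double[][][] relBias) {
        int n = q.length, d = q[0].length;
        double[][] e = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double dot = 0.0;
                for (int h = 0; h < d; h++) {
                    // relBias[i][j] is the (possibly summed) relation embedding
                    // between tokens i and j; all zeros when unrelated.
                    dot += (q[i][h] + relBias[i][j][h]) * k[j][h];
                }
                e[i][j] = dot / Math.sqrt(d);
            }
        }
        return e; // a row-wise softmax would follow, as in standard attention
    }
}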
Our model uses relational information in the form of program graphs. A program graph extractor has been released for C# code (Allamanis et al., 2018), but not yet for Java, so we created our own. Specifically, we extract five commonly used edge types, all bi-directional, reflecting common lexical, syntactic, and semantic relations in programs (detailed in Appendix A.2.1). We use the same “leaves-only” representation as Hellendoorn et al. (2020) to limit the size of our inputs by not including non-terminal AST nodes, instead rerouting edges that connect such nodes to a representative syntax token (e.g. from an if-statement node to its “if” token in the code). Finally, to ensure that our decoder is aware of the specified range of code tokens where the invariant applies, we also leverage the relational mechanism between the decoder and encoder, using a simple unary relation (i.e., that a token is part of the invariant’s range) between the generated tokens and input tokens.
3.2.2 DECODING LOGICAL STATEMENTS
We synthesize training data using a proxy for invariants, which necessarily introduces some bias towards characteristics of if-conditions (and the code they guard) that is incompatible with true invariants. Most notably, in code, small syntactic differences lead to drastic changes in run-time
behavior. It is common for if-else statements to have quite similar bodies, for which we generate two samples: one with the if-condition as an invariant for the if block, and one with its logical negation for the else block. This approach tends to produce very similar code fragments with very similar, but logically opposite (e.g. ‘!= null’ vs. ‘== null’) conditions.
We supervise our model to encourage its representations for syntactically close but semantically opposite statements to be distinct by introducing a contrastive hinge loss term. For every training sample, we produce the logical negation of the invariant and require the decoder to produce that negation with a much higher entropy than the original. Concretely, given a statement inv comprised of tokens t_i and a negating function neg, we use the regular cross-entropy loss L_CE:

L_CE(inv) = − Σ_{i=1}^{|inv|} log prob(t_i | t_1 · · · t_{i−1}, context)

to compute the entropy distance w.r.t. its negation:

Δ_inv = L_CE(neg(inv)) − L_CE(inv)
L_hinge(inv) = max(0, ε − Δ_inv)²

in which ε is the minimum desired entropy “distance” in bits; the hinge is zero once the negation’s cross-entropy exceeds the original’s by at least ε. In this work, we set ε = 2. For this hinge-loss model, as we will call it in the rest of this paper, we train with a loss equal to L_seq + L_hinge.
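In code, the hinge term reduces to a few lines; the sketch below assumes the two cross-entropies have already been computed by the decoder, and reflects our reading of the hinge direction (zero loss once the margin ε is met).

// Sketch of the contrastive hinge term: penalize negations that are not at
// least EPSILON bits "harder" to generate than the true invariant.
class ContrastiveHinge {
    static final double EPSILON = 2.0; // minimum desired entropy distance, in bits

    static double hingeLoss(double ceInvariant, double ceNegation) {
        double delta = ceNegation - ceInvariant;   // Δ_inv
        double violation = Math.max(0.0, EPSILON - delta);
        return violation * violation;
    }

    static double totalLoss(double ceInvariant, double ceNegation) {
        return ceInvariant + hingeLoss(ceInvariant, ceNegation); // L_seq + L_hinge
    }
}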
4 ANALYSIS
We first assess our model’s precision/recall behavior on our automatically collected corpus; then, we apply it to a promising down-stream task: missing if-guard repair (and detection), which further helps us assess the models’ sensitivity to salient invariants. Finally, we use trace data to get a measure of our invariants’ validity and contrast it with an execution-based tool.
4.1 CORPUS DATA
We sample our two models’ held-out performance every 100,000 samples while training,1 leading to the learning curves shown in Figure 4a. The base model saturates earlier than the one employing a contrastive hinge loss, as the latter faces the more challenging task of distinguishing between very similar statements. However, after ca. one week of training, both models converge to approximately the same quality. It speaks to the challenge of the task that the models only reach ∼30% accuracy, due in part to the enormously diverse vocabulary of statements that occurs across our corpus, and to the inherent ambiguity of generating a single invariant when multiple valid options are available (as
1A full epoch is approximately 2.3M samples for the base model and twice that for the hinge-loss models
we will study later). This task clearly stretches our current models of code to their limits, making it a promising new task to pursue in order to improve our models.
We evaluate each model at the step with their highest held-out accuracy on the test data, where we compare the top generated invariant (from beam search, size = 25) to the ground truth. Figure 4b shows the precision/recall behavior of the two models in the high precision range, which is generally much more useful to developers than high recall. We rank predictions by their entropy: an invariant that is highly likely to be sampled from its context is likely correct. Both models respond strongly to this entropy threshold, becoming especially far more precise when entropy values drop below 1.0 (around 40% recall), and converging to (near) perfect precision, at a commensurate expense of recall. Both break 80% precision at nearly 20% recall, which still accounts for tens of thousands of program points across our test projects alone. Going forward, we use the hinge loss model, which has the better precision-recall trade-off, and prioritize precision over recall.
4.2 MISSING IF DETECTION
Using the ∼3K real missing if-guard bugs collected from project histories (see Section 3.1), we first measure our model’s accuracy and precision at predicting this guard from the localized bug in the top row of Table 1. This is most directly related to its training signal, where we provided our model with the location of the code guarded by the targeted invariant. Our model achieves a similar overall accuracy here (ca. 29.3%) as on our general test data,2 and precision at 10% recall is also quite high (69.1%), allowing us to fix 215 out of 311 bugs at that level once located. That these tasks appear to be comparably “hard” is relevant; automatically synthesized training data is often overly easy compared to real tasks, which harms generalization (Hellendoorn et al., 2019b).
We also care about our model’s sensitivity to salience: the missing condition in these samples is (arguably) the most important invariant in the entire method, not just the indicated code block. Our model should be able to detect this given how it was trained. This contrasts with tools like Daikon (Ernst et al., 2007), which emit all logically valid invariants, many of which are irrelevant (Hellendoorn et al., 2019a). The next three rows of Table 1 show the results of running our invariant generator on every contiguous segment (up to 5 blocks) of code in each buggy method, ranking the top invariants across segments for inspection. This is substantially harder than the previous task, reducing the overall accuracy threefold and roughly halving precision. Nevertheless, that is still much better than might be expected if BODYGUARD had no location-sensitivity: we test over 30 blocks per method on average. We also show that the top prediction often matches some aspect of the correct answer, especially the position, and often predicts the correct invariant at another (nearby) block of code.
Finally, we note that the other (low entropy) invariants predicted here are often not at all “incorrect”; from cursory inspection, many are valid, meaningful statements. We study their validity next.
4.3 VALIDITY AND OVERLAP WITH DAIKON
Learning invariants just from code stands in sharp contrast to most current approaches in this field, prominently including Daikon (Ernst et al., 2007), which learns invariants from execution trace data instead. Collecting trace data requires instrumenting projects and access to diverse, representative workloads. This makes it much harder to apply to arbitrary code than our approach but has the benefit of offering stronger guarantees of correctness. Comparing our model with Daikon in projects where this information is available thus allows for two useful evaluations. First, we can lower-bound
2The base model (trained without hinge loss) reached 26.8% accuracy.
our tool’s true-positive rate by determining how often it replicates Daikon’s own invariants, which we tentatively deem “safe” because they hold on all observed traces and have passed a significance test.3 Second, we can use this trace data directly to determine the validity of (a subset of, see Appendix A.3.2) our invariants that do not overlap with Daikon’s.
Figure 5a shows the first result: the frequency with which our invariants overlap with Daikon’s, again plotted against recall, where the points correspond to entropy thresholds ranging from 1e-4 to 10. Evidently, pre-conditions are easier to predict for our model, likely because it has no real notion of post-conditions (see Appendix A.3.2). Even so, our tool can retrieve more than two-thirds of Daikon’s invariants at a respectable 10% recall from static code alone, which is quite promising.
We generate 10 invariants per program point using beam search, so even at a low entropy threshold we produce many pre- and post-conditions that Daikon does not (those either out of its vocabulary, or with too few observations). It is reasonable to expect many of these to be valid given previous results. Since Daikon does not provide a means of validating a plain-text invariant, we wrote a simple logical engine that parses Daikon’s trace data files and compares a number of categories of our invariants against the recorded values, such as array length, string equivalence, instanceof checks, etc. Using this approach, we are able to validate ca. 40% (12K) of our emitted invariants, resulting in the validities summarized in Figure 5b. In short, our invariants at full recall are valid ca. 60% of the time, and this validity ratio greatly increases as we sharpen the entropy threshold, to over 80% at recall values under 10%.
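Checks of this kind are simple to express; the sketch below illustrates two of the categories mentioned (array length and nullness), with trace parsing elided and all names our own.

import java.util.List;

// Sketch of the kind of check a simple logical engine can perform against
// recorded trace values for one program point.
class TraceChecker {
    // e.g. the invariant "arr.length >= 1" holds iff every recorded length is >= 1
    static boolean checkArrayLengthAtLeast(List<Integer> recordedLengths, int bound) {
        return recordedLengths.stream().allMatch(len -> len >= bound);
    }

    // e.g. the invariant "x != null" holds iff no recorded value was null
    static boolean checkNotNull(List<Object> recordedValues) {
        return recordedValues.stream().allMatch(v -> v != null);
    }
}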
Many of these validated invariants were not produced by Daikon, implying that static and dynamic data are orthogonal for this task. We collected the 708 pre-conditions that BODYGUARD generates at an entropy of ≤0.1; of these, 540 could be checked automatically with trace data, yielding 449 valid and 91 invalid cases. We manually inspected the 168 remaining cases and found that most (122) were valid, but Daikon’s tracer simply did not record the information needed to predict these.4 Overall, this suggests that more than 80% of our invariants at this recall level (3.5%) are correct, and more than two-thirds of the invalid remainder could be ruled out using trace data, if available, leaving a false positive rate of just 6.5% (46/708) when execution data is available (while also adding about 200 valid invariants to Daikon’s own predictions). This supports our belief that our tool is largely orthogonal to, and usefully synergistic with, dynamic, trace-based invariant generators.
3Though in practice it generates a fair number of spurious statements still. 4Some of these were correct statements but not proper pre-conditions, e.g. invariants about a variable declared at the first line of the function. This is an artifact of our training setup, which has no explicit notion of method-level pre-conditions. We marked these as invalid for this analysis.
5 RELATED WORK
Automatically inferring invariants is usually approached either in constrained settings where some “checker” (e.g. an SMT solver) or ground-truth is available, or under the assumption that we have access to execution traces from realistic workloads. Among the first, Sharma et al. (2013b) find algebraic (polynomial) invariants by solving a system of linear equations with an SMT solver and using counterexamples to create new test inputs. Sharma et al. (2013a) use PAC-learning to learn integer loop invariants on programs with a single loop, trained by contrasting passing and failing test cases. Padhi et al. (2016) learn pre-conditions and loop invariants as boolean combinations of arithmetic conditions (“features”), which they synthesize by generating and testing all features up to a size cutoff. This approach is agnostic to the program structure, as is Pham et al. (2017), who use a fixed set of feature templates over state vectors to learn linear inequalities that classify passing and failing state vectors, requiring both post-conditions and passing and failing tests to be in place. In contrast, our work makes no assumptions about the code other than the availability of a parser. In settings where an SMT solver (or test cases) is available, it could be used to filter invalid invariants generated by BODYGUARD.
Among machine learning based approaches, Si et al. (2018) use policy-learning to teach a GNN to generate loop invariants in cooperation with an SMT solver (Z3), which provides intermediate rewards (through counterexamples) to finesse the sparsity of the eventual reward (the final validity of the invariant). A second reward is added to reject “meaningless” and “trivial” predicates such as e == e or e < e. Besides not requiring an SMT solver, our approach learns notions like “relevant” and “natural” directly from real code. Relatedly, Brockschmidt et al. (2017) also use GNNs to induce invariants over data structures, using a similar approach of generating invariants (in separation logic) supervised by data produced from test runs. The production is based on hand-engineered features over the data-structure graphs. Both these approaches may be symbiotic with ours where tests or logical constraints are known, although they consider different classes of invariants.
Daikon (Ernst et al., 2007) belongs to the second class of invariant predictors, leveraging execution traces from realistic inputs to infer a large vocabulary of method pre- and post-conditions. This general applicability has led to its frequent use as a basis for other tools, often to generate an initial corpus of invariants for tasks such as automated patching (Perkins et al., 2009) and test generation (Artzi et al., 2006; Pacheco & Ernst, 2005). However, truly representative inputs are rare, and using incomplete data risks generating many irrelevant or invalid invariants. Polikarpova et al. (2009) found that the size of the test suite affects the validity of generated invariants on Eiffel programs. Kim & Petersen anecdotally note various issues with Daikon’s invariants on large C++ systems, such as a high degree of false positives and few insightful invariants. Hellendoorn et al. (2019a) similarly observe (on hand-annotated C# functions) few relevant and valid invariants based on executions from unit tests. Our approach learns directly from natural conditions to generate relevant and generalizable conditions, and when trace data is present, it can be used to filter out invalid invariants.
6 CONCLUSION
We conjectured that typically used invariants are in a sense natural, like many other aspects of programs (Hindle et al., 2012; Barr et al., 2013; Tsimpourlas et al., 2020), and therefore predictable: intentionally written in standardized ways for ease of reading and writing (Casalnuovo et al., 2019). Our results support this claim: both explicit (if-statements) and implicit (invariants) conditions pertaining to code can be predicted precisely, and with high validity, from code reading alone, facilitated by our proposed data generation approach and loss function. As a result, we can generate many invariants that were previously only accessible through trace data (and more), which greatly increases the reach and applicability of invariant inference.
This finding has broad implications: our tool can provide valuable semantic insights both to developers, e.g. to aid debugging efforts or facilitate code understanding, and to other tools, many of which struggle to navigate an exponentially large search space of programs. Our tool can help bias this search space using highly likely assertions, which could greatly improve the range and quality of solutions found by downstream applications. In summary, our novel approach learns to reason about program state by synthesizing training data from if-conditions; this empowers BODYGUARD to reliably generate useful invariants entirely from static code.
A APPENDIX
A.1 DATA COLLECTION DETAILS
We base our evaluation on a Java dataset consisting of the top 10,000 most-starred Java projects on Github, collected March 30th, 2020 using the Github v3 API. Since generating our training data samples is quite expensive, we used just the top 1,000 (most starred) of these projects to automatically generate training and evaluation samples for the results described in Section 4.1. This dataset was split between training, held-out and evaluation sets at the organization level to ensure minimal duplication, as projects within the same organization often share many coding patterns (Allamanis, 2019). We allocated 95% of organizations (920 projects) to training data, 2% to held-out data (19 projects), and 3% to test data (61 projects), to assess the final trained models.
A.1.1 INVARIANT GENERATION
We parse each file using Eclipse’s JDT parser and extract all (non-nested) methods from the resulting parse tree. Within each method, we detect all if-statements, removing all those whose conditions contain side-effects (such as assignments, increment/decrement operators, and non-whitelisted methods, see Appendix A.1.2), and those whose body contains a control-flow altering statement (e.g. return, throw) unless it is the sole statement.5 For the remainder, we generate samples based on the following types of if-statements:
Simple if-statements: these include samples like Figure 1, in which a single if-statement guards a simple body with no control-flow altering code.
If-else statements: for these we generate two samples: one in which we remove the else block entirely and generate an if-invariant as above, and one in which we negate the condition and generate an invariant for just the else block. Note that else if statements in Java are treated as nested statements and thus handled the same way.
Control-flow altering if-statements: any if-statement whose body prevents the execution of subsequent code, by containing just a return, break, continue, or throw (Exception) statement, is treated as declaring an invariant (namely, the negation of the if-condition) for the ensuing code.
In all cases, the surrounding context is the entire method, and the range of tokens to which the condition applies (namely, those that used to be guarded) is stored with the sample. We generate samples for all these conditions, producing a new sample for every if-statement. This ensures that each sample minimally alters the original code, which reduces the risk that we produce unnatural code (which would harm the generalization of our model). As such, a method can produce many samples, so functions with many conditions will be represented proportionally more often. We do not consider that problematic, as 1. long functions tend to have correspondingly more invariants, so the increased emphasis should be beneficial to our model, and 2. we anyways cap our training samples to only modestly large functions (up to 500 sub-tokens, which typically translates to the order of 20 lines), due to memory constraints.
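The two most common conversions above can be sketched as follows; the IfStatement interface is a stand-in we introduce for Eclipse JDT’s AST types, purely for illustration.

// Sketch of turning one guarded block into a training sample: drop the guard,
// keep its condition as the invariant for the formerly guarded token range.
class SampleGenerator {
    // Stand-in for the parser's AST node; purely illustrative.
    interface IfStatement {
        String conditionText();
        String removeGuardKeepBody(String functionSource);
        String removeGuardAndBody(String functionSource);
        int bodyStart();
        int bodyEnd();
        int end();
    }

    record Sample(String functionWithoutGuard, String invariant, int rangeStart, int rangeEnd) {}

    // Simple guarded block: "if (c) { body }" becomes "body"; c is the invariant.
    static Sample fromSimpleIf(String functionSource, IfStatement stmt) {
        return new Sample(stmt.removeGuardKeepBody(functionSource),
                          stmt.conditionText(), stmt.bodyStart(), stmt.bodyEnd());
    }

    // Control-flow altering block: "if (c) return;" implies !c for ensuing code.
    static Sample fromEarlyReturnIf(String functionSource, IfStatement stmt) {
        return new Sample(stmt.removeGuardAndBody(functionSource),
                          "!(" + stmt.conditionText() + ")",
                          stmt.end(), functionSource.length());
    }
}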
A.1.2 PRODUCING NATURAL FUNCTIONS
Not all if-guards can be removed without changing the semantics of the code; conditions can have side-effects. This includes assignments (e.g. if ((x = y) != null)), certain operators (viz. ++ and --) and method calls with side-effects. To ensure that the converted code is semantically coherent, and because invariants should not have side effects anyways, we omit all such cases. Many method calls do not have side effects, so to avoid limiting our dataset too much, we heuristically select a large, but relatively “safe” set of these based on common coding patterns. This includes common “getter” methods, java.lang.Math calls, object equality tests, collection inspection methods, such as inclusion checks (e.g. ‘contains’, ‘has’) and size-related methods, and a few miscellaneous others that were common in our training data (e.g. parseInt, name). The regexes used to detect these various types of methods are listed in Table 2.
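In the spirit of Table 2 (whose actual patterns are not reproduced here), such a whitelist might look as follows; these specific regexes are examples we wrote for this sketch, not the paper’s patterns.

import java.util.List;
import java.util.regex.Pattern;

// Illustrative whitelist of presumed side-effect-free method names.
class SideEffectWhitelist {
    static final List<Pattern> SAFE_CALLS = List.of(
        Pattern.compile("^(get|is|has)[A-Z]\\w*$"),                      // getters/predicates
        Pattern.compile("^(contains|containsKey|size|isEmpty|length)$"), // collection inspection
        Pattern.compile("^(equals|equalsIgnoreCase|compareTo)$"),        // equality tests
        Pattern.compile("^(abs|min|max|floor|ceil)$")                    // java.lang.Math
    );

    static boolean isSafe(String methodName) {
        return SAFE_CALLS.stream().anyMatch(p -> p.matcher(methodName).matches());
    }
}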
Removing if-statements does not always yield meaningful code; consider:

int foo(int x, int y) {
    if (x > y) {
        return x;
    }
    return y;
}
5When an if-statement body terminates the current branch of execution only after first executing some other code, generating equivalent unguarded code is complicated: inlining the guarded code (minus the final statement) would often produce very unnatural code, as it tends to involve some form of error-recovery, such as logging or resetting a value. Omitting the entire block instead, as we do for simple control-flow altering statements may be more appropriate; future work can explore this, and various other, corner cases to generate more samples.
If we remove the conditional check, the resulting method is left with just two consecutive return statements, which is invalid in Java. This particular case would trigger a compiler error, but not all inappropriate removals do: if the if-body had instead assigned y = x + 1;, removing the guard would result in y always being assigned x + 1 before returning, making the parameter useless. Not using a parameter is not erroneous by definition, since the method foo may be inherited (or overridden in a subclass) and other instantiations do make use of it, so Eclipse’s parser just emits a warning. Since both these cases result in code that is unrepresentative of typical Java and would yield highly predictable invariants, we additionally reparse each resulting function after removal of the targeted if-statement and discard any changes that trigger compilation warnings and errors.
Specifically, Eclipse JDT requires full type resolution to guarantee correct program analysis and stops checking for violations if it finds compile-time errors from missing types. When processing as many projects as we do (many of which cannot be built automatically), we cannot soundly resolve all dependencies for each project. As a close approximation, we instead parse each function in its entire project context to allow as much heuristic type resolution as possible. Then, we look for any increase in warnings and errors between the method before and after removing an if-statement. This reduces the number of collected samples and increases the time to generate the dataset (to ca. 200 CPU hours for 1K projects), but also increases its validity by eliminating many inappropriate fragments.
Finally, we limit our functions to those having 500 (sub-)tokens or less to facilitate a reasonable modeling throughput. This does not reduce the dataset by much; most functions tend to fit this limit. In total, we collect ca. 2.34M training samples, 12.1K held-out samples and 101K test samples, with approximately 200 sub-tokens per function on average.
A.1.3 COLLECTING “MISSING IF” BUGS
We collect our dataset of missing if-condition bugs from across the history of all the aforementioned 10K projects in our dataset. For each project, we parsed every commit to the main branch, using git’s “diff” function to identify cases in which the sole addition was to wrap one or more existing statements in an if-statement. This yielded 32,471 samples from across 8,174,552 commits. Although all of these may constitute interesting samples, we prioritize bug-detection for now as the most direct application of our model. To ensure that our collected samples are likely bug-related, we focus only on the ca. 3.7K cases in which the entire commit introduced just a single if-statement in a single Java file and the corresponding commit message contained any of the common bug-related terms such as “fix”, “bug”, and “fault” (Ray et al., 2016). We additionally filtered out any commits to projects that were included in our training dataset to avoid the risk of overlap (which need not be present as many commits reflect now out-dated code), yielding 3,146 samples in total.
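The commit filter can be sketched as below; diff parsing is elided, the CommitDiff interface is a stand-in of our own, and the term list is abbreviated to the examples named above.

import java.util.regex.Pattern;

// Sketch: keep a commit if its only Java change wraps existing statements in a
// single new if-statement and the message looks bug-related.
class MissingIfMiner {
    static final Pattern BUG_TERMS =
        Pattern.compile("\\b(fix|bug|fault)\\b", Pattern.CASE_INSENSITIVE);

    static boolean isCandidate(CommitDiff diff, String message) {
        return diff.changedJavaFiles() == 1
            && diff.addedIfStatements() == 1
            && diff.otherAddedStatements() == 0
            && BUG_TERMS.matcher(message).find();
    }

    // Stand-in for a parsed diff; purely illustrative.
    interface CommitDiff {
        int changedJavaFiles();
        int addedIfStatements();
        int otherAddedStatements();
    }
}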
Table 3: per-project counts of instrumented methods and generated invariants (columns: Project, Methods, Invariants).
A.1.4 RUNNING DAIKON
Comparing our tool to Daikon (Ernst et al., 2007) required some adaptations. Daikon requires projects that are fully built, instrumentable, and have representative workloads. Unit tests are often insufficient because they test for both appropriate and inappropriate values (e.g. those triggering an exception), which is counter to our purpose.6 Scaling Daikon to our aforementioned dataset is not feasible; indeed, to the best of our knowledge there is no large public dataset of Daikon invariants on real programs. Instead, we created a modestly large dataset of our own.
To do so, we leveraged the Dacapo benchmark (Blackburn et al., 2006). Originally created to benchmark program optimizations (e.g. through better compilers), this benchmark equips each project with a set of representative workloads designed to execute many of its paths. This is ideal for our case. Practically, although the benchmark comes with a single runner for each project, Daikon could not instrument through the reflective calls that this framework uses. Instead, we manually instrumented and ran 8 projects (details in Table 3) in this suite directly, which, in nearly all cases, involved writing our own “runner” to mimic Dacapo’s instrumentation while calling the requisite project-code directly. We then applied Daikon as usual, running the code under instrumentation first and then producing invariants from the resulting traces. Table 3 summarizes the resulting invariant counts.
We limited the volume of the collected trace data by exponentially decreasing the number of traces for each program point once it was seen sufficiently often (10 times) and excluding many values from tracing, such as those that are not visible from the program point of interest and any nested values with more than three levels. Even then, Daikon required upwards of 30GB of RAM and nearly an hour of processing for the larger projects – much more than our models.
A.2 MODELING DETAILS
A.2.1 PROGRAM GRAPH EXTRACTION FOR JAVA
We used Eclipse’s JDT parser with approximate name-binding resolution to extract five edge types across three broad categories of information that are accessible in source code:
• Lexical: every token is connected to its neighbors through next-token edges (and their reverse). This adds additional sensitivity to lexically local information beyond the positional encoding used in the standard Transformer.
• Syntactic: we extract all AST parent-child relations, which provide insight into the hierarchical structure of source code.
• Data-flow: we include three types of data-flow edges: next-use edges, which connect lexically sequential uses of the same variable; computed-from edges, which connect any variable usage to the last value it was assigned; and def-use edges, which connect every variable usage to its (single) original declaration point.
In addition, every edge type has a symmetric, mirrored version (e.g. prev-token), yielding a total of 10 distinct edge kinds used by our model.
6In addition, Daikon cannot instrument JUnit-tested code since it uses reflection, which effectively makes Java tests off-limits.
A.2.2 TRAINING DETAILS
Consistent with recent observations regarding effective modeling of source code vocabulary (Hellendoorn & Devanbu, 2017; Karampatsis et al., 2020), we use Byte-Pair Encoding to create a sub-token vocabulary based on the tokens in our training data. Our vocabulary, estimated from the training data, spans 10,000 sub-tokens; both the input function and the predicted invariant are sub-tokenized using this (reversible) dictionary. Transformer models generally scale in memory needs with the square of the size of their inputs. To ensure that our minibatches are sufficiently large to keep the gradients stable, we restrict our inputs to functions with up to 500 (BPE) tokens and our invariants to 50 tokens (although invariants that long are very rare). With these cut-offs, we train batches of up to 12,500 tokens in parallel across two NVidia RTX Titan GPUs with 24GB of VRAM each. By packing similarly sized functions per batch, we minimize the overhead from padding and are able to fit ca. 70 functions per batch on average.
A.3 EVALUATION DETAILS
A.3.1 IF-CONDITION LOCALIZATION & REPAIR METRICS
Since some methods have far more program blocks than others, simply ranking all invariants across method boundaries by entropy would lead to bigger methods being disproportionately represented. Rather, we try to balance method- and invariant-level inspection cost by simulating the inspection of 10% of invariants in our dataset from a subset of methods. We do so by first ranking methods by the entropy of their top invariant, from low to high, and then inspecting all invariants from these methods in order until we have inspected 10% of all location/invariant pairs in this dataset (which number 73,738). The 10% inspection (recall) level in Table 1 corresponds to a threshold of just 0.0233 bits, under which the average method has 55.3 blocks – substantially more than the average method overall. Separating out the functions with 32 or fewer program points (the mean), the overall accuracy increases to 16.3% and the 10% recall precision increases to 50.0% – the joint task is naturally easier on shorter methods.
A.3.2 GENERATING PRE- AND POST-CONDITIONS WITH BODYGUARD
The comparison with Daikon invariants comes with an important caveat: Daikon only generates method pre- and post-conditions. This means that we cannot perfectly classify the validity of all our invariants. Nevertheless, our experiments on missing conditions show that our models are precise at inferring even very specific missing conditions, which strongly suggests (as has our manual analysis) that many of its other suggestions are valid as well.
Secondly, our tool produces invariants for any syntactic block of code throughout the method and does not have a general mechanism to indicate that pre- or post-conditions are required. The closest approximation with our tool is to mark the entire method body as needing an invariant when a pre-condition is required, and the final (return) statement otherwise. To avoid the complexity of having to match multiple return points, or none at all for void methods, we restrict the latter case to methods with a single return statement only. Note that the latter is an imperfect approximation: our tool only learns to predict guards that precede a statement. A guard that it predicts for a return statement may not be an appropriate substitute for a true post-condition but rather a reason to return at that particular point.
A.3.3 MEASURING OVERLAP WITH DAIKON’S INVARIANTS
We quantify the overlap between our predicted invariants and Daikon’s using normalized Cumulative Gain. This metric captures the quality of a ranker in terms of how often it returns relevant elements; it is traditionally used in information retrieval, for example to evaluate a web search engine. Although discounted cumulative gain is more commonly used, we refrain from penalizing based on the “rank” of predictions, because there is no reason to assume that Daikon’s invariants are more salient or relevant than others that we predict. That is, all that matters is that Daikon’s invariants are among our (top 10) predictions.
A.4 FURTHER RESULTS
A.4.1 CHARACTERISTICS OF MANUALLY INSPECTED INVARIANTS
A large portion of the manually verified invariants in Section 4.3 corresponded to fairly trivial statements, such as instanceof assertions for a value being cast to the corresponding type. In some cases, our invariants were more general or accurate than Daikon’s; e.g. when BODYGUARD asserts that an object is not null whereas Daikon asserts that a member of that object is not null. At other times, we inferred invariants that Daikon missed entirely, likely due to limitations in its internal rules and heuristics. For instance, as a pre-condition of:
static ReliableFile getReliableFile(File file) throws IOException {
    if (file.isDirectory()) {
        throw new FileNotFoundException("");
    }
    return new ReliableFile(file);
}
BODYGUARD correctly inferred that !file.isDirectory(), while Daikon only offered file != null.
In another case, our tool produced a more specific invariant for this PMD snippet:
public int getPriority() { return priority; }
Here, Daikon asserts that the priority level is exactly either 2 or 3, because those are the only observed values in the (evidently unrepresentative) traces of this method. This indicates how Daikon’s invariants can be inaccurate even with available workloads. BODYGUARD more broadly anticipates that priority >= 0, which matches the method’s actual specification as encoded in its Javadoc documentation (which our tool does not use).
A.4.2 FURTHER EXAMPLES
In the below example,7 a badge variable, initialized to null, is first assigned a value based on program state, and then added to two collections (local and, conditionally, global). This second segment, after the switch statement, should have been guarded by a check that badge != null, since not every case assigns it a value. Across all 53 permutations of code blocks (and countless options per block) in this method, BODYGUARD predicts this condition at the correct location at rank 3. Its first prediction was the nonsensical statement !global as a guard for the entire method body. Possibly, no good prediction was possible for that range, so this option had low entropy by sheer contrast with other possibilities. The second ranked prediction was badge == null for every line after the declaration of badge. While this is tautologically valid as a pre-condition for those lines, it highlights the importance of specificity in range – it is only truly invariant for some of these lines, specifically, the start of each case and the break statement of the latter two, a range that is not currently supported by our approach.
public static void validateTutorial() {
    Badge badge = null;
    switch (Dungeon.hero.heroClass) {
        case WARRIOR: badge = Badge.TUTORIAL_WARRIOR; break;
        case MAGE: badge = Badge.TUTORIAL_MAGE; break;
        case ROGUE: break;
        case HUNTRESS: break;
    }
    local.add(badge);
    if (!global.contains(badge)) {
        global.add(badge);
        saveNeeded = true;
    }
}
7Repaired in https://github.com/00-Evan/shattered-pixel-dungeon/commit/475d78cd0599a1d39c4708a91fbb30c95b3f3418
The following snippet8 returns a default image, generating it on the first call. Even though the documentation of createBitmap(int, int, Bitmap.Config)9 does not specify it, this method can return null in rare circumstances, such as when a phone runs out of memory and recovers by aborting this call.10 BODYGUARD correctly infers empty != null as the top invariant, having seen similar calls in other Android projects in its training data. Specifically, it predicts this invariant both for just the line containing empty.eraseColor (rank 1) and for the block including that and the next line (rank 2). The latter is the more correct segment.
private static Bitmap getDefaultThumbnail() {
    if (defaultImage == null) {
        Bitmap empty = Bitmap.createBitmap(160, 200, Bitmap.Config.ARGB_8888);
        empty.eraseColor(Color.WHITE);
        defaultImage = paint(empty);
    }
    return defaultImage;
}
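A minimal sketch of a repair consistent with the inferred invariant follows; again, the actual fix in the linked commit may differ.

// Hypothetical guard enforcing the inferred invariant empty != null
// before the bitmap is used.
Bitmap empty = Bitmap.createBitmap(160, 200, Bitmap.Config.ARGB_8888);
if (empty != null) {
    empty.eraseColor(Color.WHITE);
    defaultImage = paint(empty);
}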
A.5 LIMITATIONS
We evaluated our predictions broadly to assess both their salience and validity. Even so, it is hard to automatically assess all of our invariants, especially those inserted in the middle of methods and those whose vocabulary is outside of what Daikon finds. However, the results on the task of predicting missing if-statements (which avoids these evaluation problems) are quite encouraging; we believe that this bodes well for the more general settings. Future work may better assess the validity of our entire vocabulary of invariants, perhaps by injecting asserts corresponding to our predictions into the source code and executing the tests.
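As a sketch of that direction, a predicted invariant could be compiled into a plain Java assertion at its predicted location and checked by running the existing test suite with assertions enabled (java -ea). The placement and message below are illustrative, not part of our tool.

// Hypothetical instrumentation: the test suite fails fast if the
// predicted invariant is ever violated at runtime.
assert badge != null : "predicted invariant violated: badge != null";
local.add(badge);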
Our second main criterion is salience: our predictions should be particularly relevant to the referenced code, in contrast to prior work. We chose to assess this by using real missing if guards, which would appear to be a good example of particularly salient implicit conditions (as developers chose to make them explicit). We did not quantitatively study other types of salience, such as which conditions are most informative or intuitively obvious to real developers. This, too, may be a fruitful area for future work; human subject studies involving invariants have produced worthwhile insights into developer behavior in the past (Staats et al., 2012).
8Repaired in https://github.com/SufficientlySecure/document-viewer/commit/ 680650556340aa15502e1ec375e4255c1c16fb5b
9https://developer.android.com/reference/android/graphics/Bitmap# createBitmap(int,int,android.graphics.Bitmap.Config)
10As suggested at https://stackoverflow.com/a/14778533. | 1. What is the main contribution of the paper regarding program invariant inference?
2. What are the strengths and weaknesses of the proposed technique, particularly in its application to different programs?
3. How does the reviewer suggest improving the paper, especially in clarifying the types of predicted conditions?
4. What references does the reviewer provide that may be helpful in enhancing the paper's content?
5. Are there any questions or concerns regarding the paper's framing, technical details, or examples that the reviewer would like the authors to address? | Review | Review
Summary
The paper presents a technique for inference of certain kinds of program invariants directly from the program’s source code. The basic idea is to treat conditional statements as hints for facts about the program state that should hold at a given program point.
Strengths
This is a challenging problem and the paper shows some successful examples. The idea that some useful invariants can be inferred based on local information, while not new, is interesting and can lead to follow-up work of practical value.
The contrastive hinge loss over syntactically close but semantically opposite statements is interesting.
Weaknesses
The paper falls short on the framing of the invariant inference problem, and on the technical details of what it means to infer a meaningful local invariant. This starts from trivialities, like the fact that the problem is generally undecidable (and not as stated in Section 2), and extends to the use of incorrect terminology for invariants, guards, pre/post conditions, etc. This just makes the paper hard to follow.
Fundamentally, beyond simple invariants (array bounds, nullness checks) it is not clear why program invariants would generalize well across different programs. The exception is of course the use of libraries and invariants in library contracts (as learned in [PLDI19a, PLDI19b]). For nullness guards, you should take a look at [https://arxiv.org/pdf/1902.06111.pdf]. I think it would improve the paper if you could focus on a certain kind of invariants, and show that these invariants can in fact generalize across programs.
As a concrete example, take your own Figure 1. Assuming that these are two different programs, there is no reason to assume that the contract of calculateTime() remains the same. Had calculateTime() been part of some standard library shared between programs, the case for generalization would have been much stronger.
There has been so much work on static inference of invariants that it is impossible to list even all the closely related work. Some things that are worth looking into are the work on scalable static analysis [Scaling], the inference of necessary preconditions [Logozzo], and bug detection that is based on "belief" [deviant, belief], which is closely related to your intuition about naturalness and human-written invariants. It is also helpful to look at [loopInvariants] and the related work mentioned there.
Comparison to Daikon. As you correctly point out, Daikon infers likely pre/postconditions. The description of how you compare your invariants to those inferred by Daikon is not clear, unless all relevant cases relate to (pre)conditions on method parameters.
Questions for Authors
It would be helpful to see more characteristics of the real missing if conditions that you have collected. I am wondering if these are simple conditions, such as missing nullness checks or missing array-bound checks. The way in which you have collected these samples is likely to create a bias towards simple missing conditions. How many terms are in these conditions? How many of them are nullness checks? How many are array-bound checks? How many include simple string operations and/or other simple method calls, as implied by Table 2?
Improving the Paper
I liked the idea of removing conditionals to infer likely necessary preconditions. It would help to clarify when what you predict is a guard, a precondition, an invariant, or something else.
You are clearly not trying to infer any loop invariants, and it would help clarify that upfront.
References
[PLDI19a] Scalable taint specification inference with big code https://dl.acm.org/doi/10.1145/3314221.3314648
[PLDI19b] Unsupervised learning of API aliasing specifications https://dl.acm.org/doi/10.1145/3314221.3314640
[Scaling] Scaling static analyses at Facebook https://dl.acm.org/doi/10.1145/3338112
[Logozzo] Automatic inference of necessary preconditions https://link.springer.com/chapter/10.1007/978-3-642-35873-9_10
[deviant] Bugs as deviant behavior: a general approach to inferring errors in systems code https://dl.acm.org/doi/10.1145/502034.502041
[belief] Static error detection using semantic inconsistency inference https://dl.acm.org/doi/abs/10.1145/1250734.1250784
[loopInvariants] Learning Loop Invariants for Program Verification http://papers.nips.cc/paper/8001-learning-loop-invariants-for-program-verification |
ICLR | Title
Contemplating Real-World Object Classification
Abstract
Deep object recognition models have been very successful over benchmark datasets such as ImageNet. How accurate and robust are they to distribution shifts arising from natural and synthetic variations in datasets? Prior research on this problem has primarily focused on ImageNet variations (e.g., ImageNetV2, ImageNet-A). To avoid potential inherited biases in these studies, we take a different approach. Specifically, we reanalyze the ObjectNet dataset1 recently proposed by Barbu et al., containing objects in daily life situations. They showed a dramatic performance drop of the state of the art object recognition models on this dataset. Due to the importance and implications of their results regarding the generalization ability of deep models, we take a second look at their analysis. We find that applying deep models to the isolated objects, rather than the entire scene as is done in the original paper, results in around 20-30% performance improvement. Relative to the numbers reported in Barbu et al., around 10-15% of the performance loss is recovered, without any test time data augmentation. Despite this gain, however, we conclude that deep models still suffer drastically on the ObjectNet dataset. We also investigate the robustness of models against synthetic image perturbations such as geometric transformations (e.g., scale, rotation, translation), natural image distortions (e.g., impulse noise, blur) as well as adversarial attacks (e.g., FGSM and PGD-5). Our results indicate that limiting the object area as much as possible (i.e., from the entire image to the bounding box to the segmentation mask) leads to consistent improvement in accuracy and robustness. Finally, through a qualitative analysis of ObjectNet data, we find that i) a large number of images in this dataset are hard to recognize even for humans, and ii) easy (hard) samples for models match with easy (hard) samples for humans. Overall, our analyses show that ObjectNet is still a challenging test platform for evaluating the generalization ability of models. Code and data are available at https://github.com/aliborji/ObjectNetReanalysis.git.
1 INTRODUCTION
Object recognition3 can be said to be the most basic problem in vision sciences. It is required in the early stages of visual processing before a system, be it a human or a machine, can accomplish other tasks such as searching, navigating, or grasping. Application of a convolutional neural network architecture (CNN) known as LeNet (LeCun et al., 1998), albeit with new bells and whistles (Krizhevsky et al., 2012), revolutionized not only computer vision but also several other areas. With the initial excitement gradually dampening, researchers have started to study the shortcomings of deep models and question their generalization ability. From prior research, we already know that CNNs: a) lack generalization to out of distribution samples (e.g., Recht et al. (2019); Barbu et al. (2019); Shankar et al. (2020); Taori et al. (2020); Koh et al. (2020)). Even after being exposed to many different instances of the same object category, they fail to fully capture the concept. In stark contrast, humans can generalize from only a few examples (a.k.a. few-shot learning), b) perform poorly when applied to transformed versions of the same object. In other words, they
1https://objectnet.dev/ 2See https://openreview.net/forum?id=Q4EUywJIkqr for reviews and discussions. A preliminary version of this work has been published on arXiv (Borji, 2020). 3Classification of an object appearing alone in an image. For images containing multiple objects, object localization or detection is required first.
are not invariant to spatial transformations (e.g., translation, in-plane and in-depth rotation, scale) as shown in (Azulay & Weiss, 2019; Engstrom et al., 2019; Fawzi & Frossard, 2015), as well as noise corruptions (Hendrycks & Dietterich, 2019; Geirhos et al., 2018b), and c) are vulnerable to imperceptible adversarial image perturbations (Szegedy et al., 2013; Goodfellow et al., 2014; Nguyen et al., 2015). The majority of these works, however, have used either the ImageNet dataset or its variations, and thus might be biased towards ImageNet characteristics. Utilizing a very challenging dataset that has been proposed recently, known as ObjectNet (Barbu et al., 2019), here we seek to answer how well the state of the art CNNs generalize to real world object recognition scenarios. We also explore the role of spatial context in object recognition and answer whether it is better to use cropped objects (using bounding boxes) or segmented objects to achieve higher accuracy and robustness. Furthermore, we study the relationship between object recognition, scene understanding, and object detection. These are important problems that have been less explored.
Several datasets have been proposed for training and testing object recognition models, and to study their generalization ability (e.g., ImageNet by Deng et al. (2009), Places by Zhou et al. (2017), CIFAR by Krizhevsky et al. (2009), NORB by LeCun et al. (2004), and iLab20M by Borji et al. (2016)). As the most notable one, the ImageNet dataset has been very instrumental for gauging the progress in object recognition over the past decade. A large number of studies have tested new ideas by training deep models on ImageNet (from scratch), or by finetuning pre-trained (on ImageNet) classification models on other datasets. With the ImageNet being retired, the state of the object recognition problem remains unclear. Several questions such as out of distribution generalization, “superhuman performance” (He et al., 2016) and invariance to transformations persist. To rekindle the discourse, recently Barbu et al. (2019) introduced the ObjectNet dataset which according to their claim has less bias than other recognition datasets4. This dataset is supposed to be used solely as a test set and comes with a licence that disallows the researchers to finetune models on it. Images are pictured by Mechanical Turk workers using a mobile app in a variety of backgrounds, rotations, and imaging viewpoints. ObjectNet contains 50,000 images across 313 categories, out of which 113 are in common with ImageNet categories. Astonishingly, Barbu et al. found that the state of the art object recognition models perform drastically lower on ObjectNet compared to their performance on ImageNet (about 40-45% drop). Our principal goal here is to revisit Barbu et al.’s analysis and measure the actual performance drop on ObjectNet compared to ImageNet. To this end, we limit our analysis to the 113 overlapped categories between the two datasets. We first annotate the objects in the ObjectNet scenes by drawing boxes around them. We then apply a number of deep models on these object boxes and find that models perform significantly better now, compared to their performance on the entire scene (as is done in Barbu et al.). Interestingly, and perhaps against the common belief, we also find that training and testing models on segmented objects, rather than the object bounding box or the full image, leads to consistent improvement in accuracy and robustness over a range of classification tasks and image transformations (geometric, natural distortions, and adversarial attacks). Lastly, we provide a qualitative (and somewhat anecdotal) analysis of extreme cases in object recognition for humans and machines.
2 RELATED WORK
Robustness against synthetic distribution shifts. Most research on assessing model robustness has been focused on synthetic image perturbations (e.g., spatial transformations, noise corruptions, simulated weather artifacts, temporal changes (Gu et al., 2019), and adversarial examples) perhaps because it is easy to precisely define, implement, and apply them to arbitrary images. While models have improved significantly in robustness to these distribution shifts (e.g., Zhang (2019); Zhang et al. (2019); Cohen & Welling (2016)), they are still not as robust as humans. Geirhos et al. (2018b) showed that humans are more tolerant against image manipulations like contrast reduction, additive noise, or novel eidolon-distortions than models. Further, humans and models behave differently (witnessed by different error patterns) as the signal gets weaker. Zhu et al. (2016) contrast the influence of the foreground object and image background on the performance of humans and models.
Robustness against natural distribution shifts. Robustness on real data is a clear challenge for deep neural networks. Unlike synthetic distribution shifts, it is difficult to define distribution shifts that occur naturally in the real-world (such as subtle changes in scene composition, object types, and lighting conditions). Recht et al. (2019) closely followed the original ImageNet creation process to build a new test set called ImageNetV2. They reported a performance gap of about 11% (top-1 acc.) between the performance of the best deep models on this dataset and the original test set. Similar observations have been made by Shankar et al. (2020). By evaluating 204 ImageNet models in 213 different test conditions, Taori et al. (2020) found that a) current synthetic robustness does not imply natural robustness. In other words, robustness measures for synthetic distribution shifts are weakly predictive of robustness on the natural distribution shifts, b) robustness measurements should control for accuracy since higher robustness can sometimes be explained by the higher accuracy on a standard unperturbed test set, and c) training models on larger and more diverse data improves robustness but does not lead to full closure of the performance gap. A comprehensive benchmark of distribution shifts in the wild, known as WILDS, has recently been published by Koh et al. (2020), encompassing different data modalities including vision. In D’Amour et al. (2020), the authors regard “underspecification” as a major challenge to the credibility and generalization of modern machine learning pipelines. An ML pipeline is underspecified when it returns models that perform very well on held-out test sets during training but perform poorly at deployment time.
4The ObjectNet dataset, however, has its own biases. It consists of indoor objects that are available to many people, are mobile, and are not too large, too small, fragile, or dangerous.
Contextual interference. Context plays a significant role in pattern recognition and visual reasoning (e.g., Bar (2004); Torralba & Sinha (2001); Rabinovich et al. (2007); Heitz & Koller (2008); Galleguillos & Belongie (2010)). The extent to which visual context is being used by deep models is still unclear. Unlike models, humans are very good at exploiting context when it is helpful and discarding it when it causes ambiguity. In other words, deep models do not understand what the foreground object is and what constitutes the background5. Nagarajan et al. (2020) mention that ML models utilize features (e.g., image background) which are spuriously correlated with the label during training. This makes them fragile at test time when statistics slightly differ. As we argue here, this is one of the main reasons why deep models are so vulnerable to geometric and adversarial perturbations. Geirhos et al. (2020) have studied this phenomenon under the “shortcut learning” terminology from a broader perspective.
Insights from human vision. CNNs turn out to be good models of human vision and can explain the first feed-forward sweep of information (see Kriegeskorte (2015) for a review). They, however, differ from human visual processing in several important ways. Current object recognition methods do not rely on segmentation, whereas figure-ground segmentation plays a significant role in human vision, in particular for the encoding of spatial relations between 3D object parts (Biederman, 1987; Serre, 2019). Some computer vision works, predating deep learning, have also shown that pre-segmenting the image before applying the recognition algorithms improves the accuracy (Malisiewicz & Efros, 2007; Rabinovich et al., 2007; Rosenfeld & Weinshall, 2011). Unlike the human vision system, CNNs are hindered drastically in crowded scenes (e.g., Volokitin et al. (2017)). CNNs rely more on texture whereas humans pay more attention to shape (Geirhos et al., 2018a). Utilizing minimal recognizable images, Ullman et al. (2016) argued that the human visual system uses features and processes that are not used by current deep models.
5As an example, consider a model that is trained to classify camels vs. cows, with camels always shown in sandy backgrounds and cows shown against grassy backgrounds. Although such a model does well during training, it gets confused when presented with cows in sandy backgrounds at test time (Beery et al., 2018). See also Rosenfeld et al. (2018) for another example in the context of object detection.
[Figure 2: Accuracy (%) of object recognizers by year, overlaying our analysis on the ObjectNet paper's results. Series include ObjectNet Top-1 (box) and ObjectNet Top-5 (box); plotted values include 13.86, 27.89, 31.19, 39.48, 49.84, and 61.50. Annotations: "40-45% performance drop" (ObjectNet paper) and "25-35% performance drop", "Using our code" (our analysis).]
3 EXPERIMENTS AND RESULTS
3.1 ACCURACY AND ROBUSTNESS AGAINST NATURAL DISTRIBUTION SHIFTS
A critique of Barbu et al. (2019). Barbu et al.’s work is a great contribution to the field, answering how well object recognition models generalize to real-world circumstances while controlling for biases in data collection. It, however, suffers from a major shortcoming: it makes no distinction between “object detection” and “object recognition”. This confusion brings along several concerns:
1. They use the term “object detector” to refer to “object recognition” models. Object detection and object recognition are two distinct, yet related, tasks. Each one has its own models, datasets, evaluation measures, and inductive biases. For example, as shown in Fig. 1, images in object recognition datasets (e.g., ImageNet) often contain a single object, usually from a closeup view, whereas scenes in object detection datasets (e.g., MS COCO (Lin et al., 2014), OpenImages (Kuznetsova et al., 2018)) usually have multiple objects. Objects in the detection datasets vary more in some parameters such as occlusion and size. For instance, there is a larger variation in object scale in detection datasets (Singh & Davis, 2018). This discussion also relates to the distinction between “scene understanding” and “object recognition”. To understand a complex scene, as humans we look around, fixate on individual objects to recognize them, and accumulate information over fixations to perform more complex tasks such as answering a question or describing an event. To avoid biases in recognition datasets (e.g., typical scales or object views), we propose to (additionally) use detection datasets to study object recognition. We will discuss this further in Section 4.
2. Instead of applying models to isolated objects, Barbu et al. apply them to cluttered scenes containing multiple objects. Unlike ImageNet where the majority of images include only a single object, ObjectNet images have multiple objects in them and are often more cluttered. Therefore, the drop in performance of models on ObjectNet can be merely due to the fact that pretrained models on ImageNet have been trained on individual objects.
3. In addition to top-1 accuracy, Barbu et al. also report top-5 accuracy. One might argue that this may suffice in dealing with scenes containing multiple objects. Top-5 accuracy was first introduced in Russakovsky et al. (2015) to remedy the issues with the top-1 accuracy. The latter can be overly stringent by penalizing predictions that appear in the image but do not correspond to the target label. Top-5 accuracy itself, however, has two shortcomings. First, a model can still be penalized if all of the five guesses exist in the image, but none is the image label. Both scores fall short in addressing the images with counter-intuitive labels (e.g., when non-salient objects are labeled; Appx. E). Second, on fine-grained classification tasks (ImageNet has several fine-grained classes e.g., dogs), allowing five
predictions can make certain class distinctions trivial (Shankar et al., 2020). For example, there are five turtles in the ImageNet class hierarchy (mud turtle, box turtle, loggerhead turtle, leatherback turtle, and terrapin) that are difficult to distinguish. A classifier may trick the score by generating all of these labels for a turtle image to ensure it predicts the correct label. Shankar et al. proposed to use multi-label accuracy as an alternative to top-5 score. Each image has a set of target labels (i.e., multi-label annotations). A prediction is marked correct if it corresponds to any of the target labels for that image. This score, however, may favor a model that generates correct labels but may confuse the locations over a model that is spatially more precise but misses some objects (See also Beyer et al. (2020)). Regardless, since multi-label annotations for ObjectNet are not available, we report both top-1 and top-5 scores when feeding isolated objects to models.
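To make the scoring concrete, here is a minimal sketch of top-1/top-5 accuracy in PyTorch; tensor names are illustrative.

import torch

def topk_accuracy(logits, targets, ks=(1, 5)):
    # logits: (N, num_classes); targets: (N,) integer labels.
    maxk = max(ks)
    _, pred = logits.topk(maxk, dim=1)      # (N, maxk) predicted class indices
    hits = pred.eq(targets.view(-1, 1))     # (N, maxk) boolean matches
    # A sample counts as correct under top-k if any of its first k guesses hit.
    return {k: hits[:, :k].any(dim=1).float().mean().item() for k in ks}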
Bounding box annotation. The 113 object categories in the ObjectNet dataset overlapping with ImageNet contain 18,574 images in total. On this subset, the average number of images per category is 164.4 (min=55, max=284). Fig. 8 in Appx. A shows the distribution of the number of images per category on this dataset (envelope and dish drying rack are the most and least frequent objects, respectively). We drew a bounding box around the object corresponding to the category label of each image. If there were multiple nearby objects from the same category (e.g., chairs around a table), we tried to include all of them in the bounding box. Some example scenes and their corresponding bounding boxes are given in Fig. 1. Appx. H shows more stats on ObjectNet.
Object recognition results. We employ six widely-used state of the art deep neural networks including AlexNet (Krizhevsky et al., 2012), VGG-19 (Simonyan & Zisserman, 2014), GoogLeNet (Szegedy et al., 2015), ResNet-152 (He et al., 2016), Inception-v3 (Szegedy et al., 2016)6, and MNASNet (Tan et al., 2019). AlexNet, VGG-19, and ResNet-152 have also been used in the ObjectNet paper (Barbu et al., 2019). We use the PyTorch implementation of these models7. Since the code from the ObjectNet paper is unavailable (at the time of preparing this work), in addition to applying models to bounding boxes and plotting the results on top of the results from the ObjectNet paper, we also run our code on both the bounding boxes and the full images. This allows a fair comparison and helps mitigate possible inconsistency in data processing methods (e.g., different data normalization schemes or test time data augmentation such as rotation, scale, color jittering, cropping, etc.); a minimal sketch of this evaluation pipeline is given below.
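The sketch below assumes boxes are stored as (left, top, right, bottom) pixel coordinates; the exact preprocessing in our released code may differ.

import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
model = models.resnet152(pretrained=True).eval()

def top5_labels(image_path, box=None):
    # Classify either the full image or the cropped object box.
    img = Image.open(image_path).convert("RGB")
    if box is not None:
        img = img.crop(box)  # (left, top, right, bottom)
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return logits.topk(5, dim=1).indices.squeeze(0).tolist()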
Fig. 2 shows an overlay of our results on top of Fig. 1 from the ObjectNet paper. As can be seen, applying models to the object bounding box instead of the entire scene improves the accuracy by about 10-15%. Although the gap is narrower now, models still significantly underperform on ObjectNet compared to the ImageNet dataset. Using our code, the improvement going from the full image to bounding boxes is around 20-30% across all tested models (the right panel in Fig. 2). Our results using the full image are lower than Barbu et al.’s results using the full image (possibly because we do not utilize data augmentation). This relative difference entails that applying their code to bounding boxes will likely improve the performance beyond the 10% that we obtained here. Even assuming a 25% gain in performance on top of their best results when using boxes would still not close the performance gap, which indicates that ObjectNet remains a challenging dataset for testing object recognition models.
Breakdown of accuracy over the 113 categories is shown in Appx. B (Figs. 9 & 10 over isolated objects and Figs. 11 & 12 over the full image). Interestingly, in both cases, almost all models, except GoogLeNet on isolated objects and AlexNet on the full image, perform the best over the safety pin category. Inspecting the images from this class, we found that they have a single safety pin, often held by a person (perhaps at about the same distance from the camera, thus similar scales). The same story is true about the banana class, which is the second easiest category using the bounding boxes. This object becomes much harder to recognize when using the full image (26.88% vs. 70.3% using boxes), which highlights the benefit of applying models to isolated objects rather than scenes.
3.2 ACCURACY AND ROBUSTNESS AGAINST SYNTHETIC DISTRIBUTION SHIFTS
3.2.1 ROBUSTNESS AGAINST COMMON IMAGE CORRUPTIONS
Previous work has shown that ImageNet-trained CNNs generalize poorly over a wide range of image distortions (e.g., Hendrycks & Dietterich (2019); Azulay & Weiss (2019); Dodge & Karam (2017)). These works, however, have applied CNNs to the whole scene. Here, we ask whether applying the models to the bounding boxes can improve robustness against image distortions. Following Hendrycks & Dietterich (2019), we systematically test how model accuracy degrades if images are corrupted by 14 different types of distortions, including Gaussian noise, shot noise, impulse noise, defocus blur, glass blur, motion blur, zoom blur, snow, frost, fog, brightness, contrast, elastic transform, and JPEG compression, at 3 levels of corruption severity. Fig. 36 (Appx. F) shows sample images along with their distortions. Ten images from each of the 113 categories of ObjectNet (1130 images in total) were fed to three models: VGG-19, Inception-v3, and ResNet-152.
6Barbu et al. have used Inception-v4. 7https://pytorch.org/docs/stable/torchvision/models.html
Aggregate results over the full image and the object bounding box (both resized to 224 × 224 pixels) are shown in Fig. 3. All three models are more robust when applied to the object bounding box than the full image at all corruption levels, using both top-1 and top-5 scores (left two panels). Among models, ResNet-152 performs better and is the most robust model. It is followed by the Inception-v3 model. For nearly all of the 113 object categories, using bounding boxes leads to higher robustness than using the full image (the third panel). Similarly, using bounding boxes results in higher robustness against all distortion types (the right-most panel). Across distortion types, shown in Figs. 37 & 38 (Appx. F), ResNet-152 consistently outperforms the other two models at all severity levels, followed by Inception-v3. It seems that models are hindered more by impulse noise, frost, zoom blur, and snow distortions. The top-1 accuracy at severity level 2 on these distortions is below 20%. Overall, we conclude that limiting the object area only to the bounding box leads not only to higher prediction accuracy but also to higher robustness against image distortions. Extrapolating this approach, can we improve robustness by shrinking the object region even further by using the segmentation masks? We will thoroughly investigate this question in the next subsections.
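As an illustration of the severity sweep, here is a minimal sketch for one corruption type; the severity-to-noise mapping below is hand-picked for illustration and does not reproduce the benchmark's exact parameters.

import torch

SEVERITY_STD = {1: 0.04, 2: 0.08, 3: 0.12}  # illustrative noise levels

def gaussian_noise(images, severity):
    # Corrupt a batch of images in [0, 1] with additive Gaussian noise.
    noisy = images + torch.randn_like(images) * SEVERITY_STD[severity]
    return noisy.clamp(0.0, 1.0)

def corrupted_accuracy(model, images, targets, severity):
    with torch.no_grad():
        preds = model(gaussian_noise(images, severity)).argmax(dim=1)
    return (preds == targets).float().mean().item()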
3.2.2 ROBUSTNESS AGAINST ADVERSARIAL PERTURBATIONS
Despite being very accurate, CNNs are highly vulnerable to adversarial inputs (Szegedy et al., 2013; Goodfellow et al., 2014). These inputs are crafted carefully and maliciously by adding small imperceptible perturbations to them (e.g., altering the value of a pixel by up to 8 units under the ℓ∞-norm; pixels in the range [0, 255]). Here we apply the ImageNet pretrained models to the 1130 images that were selected above. The models are tested against the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014) at two perturbation budgets in the untargeted white-box setting.
Table 1 shows the results. We find that models are more resilient against the FGSM attack when applied to the bounding box than the full image. While the input size is the same in both cases (224×224), the adversary has more opportunity to mislead the
classifier in the full image case since a larger fraction of pixels play an insignificant role in the decisions made by the network. This aligns with observations from the visualization tools (e.g., Selvaraju et al. (2017)) revealing that CNNs indeed rely only on a small subset of image pixels to elicit a decision. One might argue that the lower robustness on the full images could be due to training and test discrepancy (i.e., training models on single objects and applying them to the entire scene). To address this, in the next subsection we train and test models in the same condition.
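The untargeted FGSM attack used here can be sketched as follows (a standard single-step formulation; images are assumed to lie in [0, 1]).

import torch
import torch.nn.functional as F

def fgsm(model, images, targets, eps):
    # One signed-gradient ascent step of size eps on the classification loss.
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), targets)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()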
3.3 THE INFLUENCE OF THE SURROUNDING CONTEXT ON ROBUSTNESS
Despite a large body of literature on whether and how much visual context benefits CNNs in terms of accuracy and robustness8, the matter has not been settled yet (e.g., Bar (2004); Torralba & Sinha (2001); Rabinovich et al. (2007); Rosenfeld et al. (2018); Heitz & Koller (2008); Divvala et al. (2009); Zhu et al. (2016); Xiao et al. (2020); Malisiewicz & Efros (2007)). To study how context surrounding an object impacts model accuracy and robustness in more detail, we conducted
two experiments. In the first one, we trained two CNNs (2 conv layers, each followed by a pooling layer, and 2 final fc layers) on the MNIST and Fashion MNIST datasets, for which it is easy to derive the foreground masks (Figs. 39 & 40; Appx. G). CNNs were trained on either the original clean images or the foreground objects placed on a white noise background. We then tested the models against the FGSM attack with and without background subtraction. With background subtraction, we essentially assume that the adversary has access only to the foreground object (i.e., effectively removing the perturbations that fall on the background). As the results in Fig. 4 show, background subtraction improves the robustness substantially for both models and over both datasets.
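A minimal sketch of the background-subtraction condition, assuming a binary foreground mask per image (for MNIST-like data the mask can be obtained by simple thresholding):

import torch

def subtract_background(clean, adv, fg_mask):
    # Keep adversarial perturbations only on foreground pixels;
    # fg_mask is 1 on the object and 0 on the background.
    return clean + (adv - clean) * fg_mask

# Illustrative mask for MNIST digits: fg_mask = (clean > 0.1).float()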
To examine whether the above conclusion generalizes to more complex natural scenes, we ran a second experiment. First, we selected images from ten classes of the MS COCO dataset including chair, car, book, bottle, dining table, umbrella, boat, motorcycle, sheep, and cow. Objects from these classes come with a segmentation mask (one object per image; 100 images per category; 1000 images in total). Around 32.7% of the image pixels fall inside the object bounding box and around 58.1% of the bounding box pixels fall inside the object mask. Fig. 5 shows a sample chair alongside its bounding box and its segmentation mask.
We then trained three ResNet-18 models (finetuned from ImageNet), one per input type: 1) full image, 2) bounding box, and 3) segmented object (placed on a dark background). Models were trained on 70 images per category (700 in total) for 10 epochs and were then tested on the remaining 30 images per category. An attempt was made to tune the parameters to attain the best test accuracy in each case (e.g., by avoiding overfitting). The test accuracies9 of the models are, in order, 66.9%, 78%, and 80.3%. One reason behind the lower prediction accuracy using boxes might be that multiple objects may fit inside the bounding box (e.g., for elongated objects such as a broom). Model performance against the FGSM and ℓ∞ PGD-5 (Projected Gradient Descent by Madry et al. (2017)) adversarial attacks is shown in Fig. 5 (left panel). We observe that training models on segmented objects leads to higher adversarial robustness against both types of attacks. The improvement is more pronounced at higher perturbations. We also considered a condition in which we masked the perturbations that fall on the background, denoted as “Seg. Mask + FG” in the figure. We noticed even higher robustness against the attacks when removing the background perturbations. These results encourage using foreground detection as an effective adversarial defense.
8The majority of such works focus on model accuracy. 9Taori et al. (2020) argue that robustness scores should control for accuracy, as more predictive models in general are more robust. To avoid this issue we used models that have about the same standard accuracy.
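For reference, a minimal sketch of the ℓ∞ PGD attack with 5 steps (a standard formulation following Madry et al. (2017); the step size and random initialization are illustrative):

import torch
import torch.nn.functional as F

def pgd(model, images, targets, eps, step_size, steps=5):
    # Random start inside the eps-ball, then iterated signed-gradient
    # steps projected back into the ball and the valid pixel range.
    adv = (images + torch.empty_like(images).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), targets)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + step_size * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)  # project to eps-ball
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()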
The middle panel in Fig. 5 shows model robustness against noise corruptions (averaged over the 14 distortions used in Section 3.2.1). Here again, we find that using segmentation masks leads to higher robustness compared to the full image and object boxes. “Seg. Mask + FG” leads to the best robustness among the input types. While it might be hard to draw a general conclusion regarding the superiority of segmentation masks over bounding boxes in object recognition accuracy, our investigation suggests that using masks leads to a significant boost in adversarial robustness with little or no drop in standard accuracy. Our results offer an upper bound on the utility of segmentation masks in robustness. More work is needed to incorporate this feat into CNNs (i.e., using attention).
3.3.1 ROBUSTNESS AGAINST GEOMETRIC TRANSFORMATIONS
We also tested the three ResNet-18 models from the previous subsection (i.e., trained over the full image, the bounding box, and the segmented object) against three geometric transformations: scaling, in-plane rotation, and horizontal translation. Fig. 6 shows the results over the 300 test images that were used in the previous subsection. We find that the model trained on segmentation masks is more robust than the other two models over all three geometric transformations, followed by the models trained on the object bounding boxes and the full image, in order.
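A minimal sketch of the transformation sweep, using torchvision's functional transforms; the parameter grids below are illustrative rather than the exact ones we used.

import torch
import torchvision.transforms.functional as TF

def _accuracy(model, images, targets):
    with torch.no_grad():
        return (model(images).argmax(dim=1) == targets).float().mean().item()

def geometric_sweep(model, images, targets):
    results = {}
    for angle in (0, 15, 30, 45):  # in-plane rotation, degrees
        results[f"rot_{angle}"] = _accuracy(model, TF.rotate(images, angle), targets)
    for scale in (0.5, 0.75, 1.0, 1.25):  # object scaling
        scaled = TF.affine(images, angle=0, translate=[0, 0], scale=scale, shear=0)
        results[f"scale_{scale}"] = _accuracy(model, scaled, targets)
    for dx in (0, 16, 32):  # horizontal translation, pixels
        shifted = TF.affine(images, angle=0, translate=[dx, 0], scale=1.0, shear=0)
        results[f"shift_{dx}"] = _accuracy(model, shifted, targets)
    return results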
3.4 QUALITATIVE INSPECTION OF OBJECTNET IMAGES AND ANNOTATIONS
During the annotation of ObjectNet images, we came across the following observations: a) Some objects look very different when they are in motion (e.g., the fan in row 4 of Fig. 34 in Appx. D), or when they are shadowed or occluded by other objects (e.g., the hammer in Fig. 34 row 4), b) Some object instances differ a lot from the typical instances in the same class (e.g., the helmet in Fig. 34 row 5; the orange in Fig. 33 row 5), c) Some objects can be recognized only through reading their captions (e.g., the pet food container in Fig. 33 row 2), d) Some images have wrong labels (e.g., the pillow in Fig. 33 row 2; the skirt in Fig. 33 row 1; the tray in Fig. 34 row 2; see also Appx. E), e) Some objects are extremely difficult for humans (e.g., the tennis racket in Fig. 34 row 4; the shovel in Fig. 33 row 4; the tray in Fig. 33 row 1), f) In many images, objects are occluded by the hands holding them (e.g., the sock and the shovel in Fig. 33 row 4), g) Some objects are hard to recognize in dim light (e.g., the printer in Fig. 33 row 2), and h) Some categories are often confused with other categories in the same set. Example sets include {bath towel, bed sheet, full sized towel, dishrag or hand towel}, {sandal, dress shoe (men), running shoe}, {t-shirt, dress, sweater, suit jacket, skirt}, {ruler, spatula, pen, match}, {padlock, combination lock}, and {envelope, letter}. The left panel in Fig. 7 shows four easy (highly confident correct predictions) and four hard (highly confident misclassifications) examples for ResNet-152 over six ObjectNet categories. In terms of difficulty level, easy (difficult) objects for models appear easy (difficult) to humans too. Also, our qualitative inspection shows that ObjectNet includes a large number of objects that can be recognized only after a careful examination (the right panel in Fig. 7). More examples are given in Appx. C.
4 TAKEAWAYS AND DISCUSSION
Our investigation reveals that deep models perform significantly better when applied to isolated objects rather than the entire scene. The reason behind this is two-fold. First, there is less variability in single objects compared to scenes containing multiple objects. Second, the deep models (used here and also in the ObjectNet paper) have been trained on ImageNet images, which are less cluttered compared to the ObjectNet images. We anticipate that training models from scratch on large scale datasets that contain isolated objects will likely result in even higher accuracy. Assuming around a 30% increase in performance (at best) on top of Barbu et al.’s results using bounding boxes still leaves a large gap of at least 15% between ImageNet and ObjectNet, which means that ObjectNet is indeed much harder. It covers a wider range of variations than ImageNet, including object instances, viewpoints, rotations, and occlusions, which pushes the limits of object recognition models. Hence, despite its limitations and biases, the ObjectNet dataset remains a great platform to test deep models in realistic situations.
We envision four research directions for future work in this area. First, background subtraction is a promising mechanism and should be investigated further over large scale datasets (given the availability of high-resolution masks; e.g., MS COCO). We found that it improves robustness substantially over various types of image perturbations and attacks. Humans can discern the foreground object from the image background with high precision. This feat might be the key to robustness and hints towards an interplay and feedback loop between recognition and segmentation that is currently missing in CNNs. Second, measuring human performance on ObjectNet will provide a useful baseline for gauging model performance. Barbu et al. report an accuracy of around 95% when they asked subjects to mention the objects that are present in the scene. This task, however, is different from recognizing isolated objects similar to the regime that was considered here (i.e., akin to rapid scene categorization tasks; see Serre et al. (2007)). Besides, error patterns of models and humans (e.g., Borji & Itti (2014)), in addition to crude accuracy measures, will inform us about the differences in object recognition mechanisms between humans and machines. It could be that models work in a completely different fashion than the human visual system. Third, as discussed in Section 3.1, multi-label prediction accuracy is more appropriate for evaluating recognition models. Annotating all objects in ObjectNet images will thus provide an additional dimension to assess models. In this regard, we propose a new task where the goal is to recognize objects in their natural contexts. This task resembles (cropped) object recognition and object detection, but it is slightly different (i.e., the goal here is to recognize an object limited by a bounding box given all available information in the scene). This is essentially an argument against the recognition-detection dichotomy. Finally, it would be interesting to see how well the state of the art object detectors perform on the ObjectNet dataset (e.g., over overlapped classes between ObjectNet and MS COCO (Lin et al., 2014)). We expect a significant drop in detection performance since it is hard to recognize objects in this dataset.
From a broader perspective, our study reinforces the idea that there is more to scene understanding than merely learning statistical correlations. In particular, background subtraction and visual context are crucial for robust recognition and demand further investigation in future studies.
A FREQUENCY OF THE IMAGES PER CATEGORY
B MODEL ACCURACY PER CATEGORY USING BOXES VS. FULL IMAGE
C EASIEST AND HARDEST OBJECTS FOR THE RESNET-152 MODEL
D SOME CHALLENGING EXAMPLES FOR HUMANS
E ANNOTATION ISSUES IN OBJECT RECOGNITION DATASETS
F ANALYSING MODEL ROBUSTNESS OVER NATURALLY DISTORTED IMAGES
G ADVERSARIAL DEFENSE USING FOREGROUND DETECTION ON MNIST AND FASHION MNIST
H STATISTICS OF OBJECTNET DATASET | 1. What is the main contribution of the paper regarding object recognition?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and significance?
3. What are the weaknesses of the paper, especially regarding its clarity and organization?
4. Do you have any concerns about the experimental design and analysis?
5. How does the reviewer assess the overall quality and impact of the paper? | Review | Review
I like the main ideas articulated in the paper, but find the writing lacks some clarity:
Summary of paper: The paper takes as a starting point the study from Barbu et al, where the robustness of object recognition pipelines to distribution shifts is studied by testing ImageNet-trained architectures against ObjectNet. The main point of the current paper is that the performance degradation seen in Barbu et al is due to the fact that the CNNs were processing the entire image as context, and when one only provides a sub-window around the objects of interest the resulting performance improves significantly. The paper also describes experiments with various synthetically distorted data and finally examines details of the ObjectNet dataset to illustrate that there are images that are hard to categorize even for humans. Thus, the paper concludes that object recognition on ObjectNet is still hard to solve.
It is clear that by using bounding boxes or even removing background from those bounding boxes, the performance will be better (since the training was on ImageNet with single objects). So, in a way, they are kind-of recreating the training distribution in order to improve the performance.
Main significance of the paper (Pros): Detailed study of performance of object recognition and the empirical finding that figure-ground segmentation may improve recognition. Analysis of properties of ObjectNet and its challenges.
Originality/Novelty: The paper is largely empirical and has a good discussion of the relevant background literature analyzing object recognition systems.
Cons: It has incremental insights.
Clarity of paper: The descriptions of the experiments are at times unclear. The structure of the paper could be simplified with a table or diagram that illustrates the logic behind the experimentation and conclusions. There are multiple datasets used; the training scheme is sometimes on ImageNet and tested on ObjectNet, and sometimes on selected categories of ObjectNet and tested on the rest of ObjectNet.
ICLR | Title
Contemplating Real-World Object Classification
Abstract
Deep object recognition models have been very successful over benchmark datasets such as ImageNet. How accurate and robust are they to distribution shifts arising from natural and synthetic variations in datasets? Prior research on this problem has primarily focused on ImageNet variations (e.g., ImageNetV2, ImageNet-A). To avoid potential inherited biases in these studies, we take a different approach. Specifically, we reanalyze the ObjectNet dataset1 recently proposed by Barbu et al. containing objects in daily life situations. They showed a dramatic performance drop of the state of the art object recognition models on this dataset. Due to the importance and implications of their results regarding the generalization ability of deep models, we take a second look at their analysis. We find that applying deep models to the isolated objects, rather than the entire scene as is done in the original paper, results in around 20-30% performance improvement. Relative to the numbers reported in Barbu et al., around 10-15% of the performance loss is recovered, without any test time data augmentation. Despite this gain, however, we conclude that deep models still suffer drastically on the ObjectNet dataset. We also investigate the robustness of models against synthetic image perturbations such as geometric transformations (e.g., scale, rotation, translation), natural image distortions (e.g., impulse noise, blur) as well as adversarial attacks (e.g., FGSM and PGD-5). Our results indicate that limiting the object area as much as possible (i.e., from the entire image to the bounding box to the segmentation mask) leads to consistent improvement in accuracy and robustness. Finally, through a qualitative analysis of ObjectNet data, we find that i) a large number of images in this dataset are hard to recognize even for humans, and ii) easy (hard) samples for models match with easy (hard) samples for humans. Overall, our analyses show that ObjecNet is still a challenging test platform for evaluating the generalization ability of models. Code and data are available at https://github.com/aliborji/ObjectNetReanalysis.git.
1 INTRODUCTION
Object recognition3 can be said to be the most basic problem in vision sciences. It is required in the early stages of visual processing before a system, be it a human or a machine, can accomplish other tasks such as searching, navigating, or grasping. Application of a convolutional neural network architecture (CNN) known as LeNet (LeCun et al., 1998), albeit with new bells and whistles (Krizhevsky et al., 2012), revolutionized not only computer vision but also several other areas. With the initial excitement gradually damping, researchers have started to study the shortcomings of deep models and question their generalization ability. From prior research, we already know that CNNs: a) lack generalization to out of distribution samples (e.g., Recht et al. (2019); Barbu et al. (2019); Shankar et al. (2020); Taori et al. (2020); Koh et al. (2020)). Even after being exposed to many different instances of the same object category, they fail to fully capture the concept. In stark contrast, humans can generalize from only few examples (a.k.a few-shot learning), b) perform poorly when applied to transformed versions of the same object. In other words, they
1https://objectnet.dev/ 2See https://openreview.net/forum?id=Q4EUywJIkqr for reviews and discussions. A prelimnary version of this work has been published in Arxiv (Borji, 2020). 3Classification of an object appearing lonely in an image. For images containing multiple objects, object localization or detection is required first.
are not invariant to spatial transformations (e.g., translation, in-plane and in-depth rotation, scale) as shown in (Azulay & Weiss, 2019; Engstrom et al., 2019; Fawzi & Frossard, 2015), as well as noise corruptions (Hendrycks & Dietterich, 2019; Geirhos et al., 2018b), and c) are vulnerable to imperceptible adversarial image perturbations (Szegedy et al., 2013; Goodfellow et al., 2014; Nguyen et al., 2015). Majority of these works, however, have used either the ImageNet dataset or its variations, and thus might be biased towards ImageNet characteristics. Utilizing a very challenging dataset that has been proposed recently, known as ObjectNet (Barbu et al., 2019), here we seek to answer how well the state of the art CNNs generalize to real world object recognition scenarios. We also explore the role of spatial context in object recognition and answer whether it is better to use cropped objects (using bounding boxes) or segmented objects to achieve higher accuracy and robustness. Furthermore, we study the relationship between object recognition, scene understanding, and object detection. These are important problems that have been less explored.
Several datasets have been proposed for training and testing object recognition models, and to study their generalization ability (e.g., ImageNet by Deng et al. (2009), Places by Zhou et al. (2017), CIFAR by Krizhevsky et al. (2009), NORB by LeCun et al. (2004), and iLab20M by Borji et al. (2016)). As the most notable one, ImageNet dataset has been very instrumental for gauging the progress in object recognition over the past decade. A large number of studies have tested new ideas by training deep models on ImageNet (from scratch), or by finetuning pre-trained (on ImageNet) classification models on other datasets. With the ImageNet being retired, the state of the object recognition problem remains unclear. Several questions such as out of distribution generalization, “superhuman performance” (He et al., 2016) and invariance to transformations persist. To rekindle the discourse, recently Barbu et al. (2019) introduced the ObjectNet dataset which according to their claim has less bias than other recognition datasets4. This dataset is supposed to be used solely as a test set and comes with a licence that disallows the researchers to finetune models on it. Images are pictured by Mechanical Turk workers using a mobile app in a variety of backgrounds, rotations, and imaging viewpoints. ObjectNet contains 50,000 images across 313 categories, out of which 113 are in common with ImageNet categories. Astonishingly, Barbu et al. found that the state of the art object recognition models perform drastically lower on ObjectNet compared to their performance on ImageNet (about 40-45% drop). Our principal goal here it to revisit the Barbu et al.’s analysis and measure the actual performance drop on ObjectNet compared to ImageNet. To this end, we limit our analysis to the 113 overlapped categories between the two datasets. We first annotate the objects in the ObjectNet scenes by drawing boxes around them. We then apply a number of deep models on these object boxes and find that models perform significantly better now, compared to their performance on the entire scene (as is done in Barbu et. al). Interestingly, and perhaps against the common belief, we also find that training and testing models on segmented objects, rather than the object bounding box or the full image, leads to consistent improvement in accuracy and robustness over a range of classification tasks and image transformations (geometric, natural distortions, and adversarial attacks). Lastly, we provide a qualitative (and somewhat anecdotal) analysis of extreme cases in object recognition for humans and machines.
2 RELATED WORK
Robustness against synthetic distribution shifts. Most research on assessing model robustness has been focused on synthetic image perturbations (e.g., spatial transformations, noise corruptions, simulated weather artifacts, temporal changes (Gu et al., 2019), and adversarial examples) perhaps because it is easy to precisely define, implement, and apply them to arbitrary images. While models have improved significantly in robustness to these distribution shifts (e.g., Zhang (2019); Zhang et al. (2019); Cohen & Welling (2016)), they are still not as robust as humans. Geirhos et al. (2018b) showed that humans are more tolerant against image manipulations like contrast reduction, additive noise, or novel eidolon-distortions than models. Further, humans and models behave differently (witnessed by different error patterns) as the signal gets weaker. Zhu et al. (2016) contrast the influence of the foreground object and image background on the performance of humans and models.
Robustness against natural distribution shifts. Robustness on real data is a clear challenge for deep neural networks. Unlike synthetic distribution shifts, it is difficult to define distribution shifts that occur naturally in the real-world (such as subtle changes in scene composition, object types, and lighting conditions). Recht et al. (2019) closely followed the original ImageNet creation process
4ObjectNet dataset, however, has it own biases. It consists of indoor objects that are available to many people, are mobile, are not too large, too small, fragile or dangerous.
to build a new test set called ImageNetV2. They reported a performance gap of about 11% (top-1 acc.) between the performance of the best deep models on this dataset and the original test set. Similar observations have been made by Shankar et al. (2020). By evaluating 204 ImageNet models in 213 different test conditions, Taori et al. (2020) found that a) current synthetic robustness does not imply natural robustness. In other words, robustness measures for synthetic distribution shifts are weakly predictive of robustness on the natural distribution shifts, b) robustness measurements should control for accuracy since higher robustness can sometimes be explained by the higher accuracy on a standard unperturbed test set, and c) training models on larger and more diverse data improves robustness but does not lead to full closure of the performance gap. A comprehensive benchmark of distribution shifts in the wild, known as WILDS, has recently been published by Koh et al. (2020), encompassing different data modalities including vision. In D’Amour et al. (2020), authors regard “underspecification” a major challenge to the credibility and generalization of modern machine learning pipelines. An ML pipeline is underspecified when it returns models that perform very well on held-out test sets during training but perform poorly at deployment time.
Contextual interference. Context plays a significant role in pattern recognition and visual reasoning (e.g., Bar (2004); Torralba & Sinha (2001); Rabinovich et al. (2007); Heitz & Koller (2008); Galleguillos & Belongie (2010)). The extent to which visual context is being used by deep models is still unclear. Unlike models, humans are very good at exploiting context when it is helpful and discard it when it causes ambiguity. In other words, deep models do not understand what is the foreground object and what constitutes the background5. Nagarajan et al. (2020) mention that ML models utilize features (e.g., image background) which are spuriously correlated with the label during training. This makes them fragile at the test time when statistics slightly differ. As we argue here, this is one of the main reasons why deep models are so vulnerable to geometric and adversarial perturbations. Geirhos et al. (2020) have studied this phenomenon under the “shortcut learning” terminology from a broader perspective.
Insights from human vision. CNNs turn out to be good models of human vision and can explain the first feed-forward sweep of information (See Kriegeskorte (2015) for a review). They, however, differ from human visual processing in several important ways. Current object recognition methods do not rely on segmentation, whereas figure-ground segmentation plays a significant role in human vision, in particular for the encoding of spatial relations between 3D object parts (Biederman, 1987; Serre, 2019). Some computer vision works, predating deep learning, have also shown that pre-segmenting the image before applying the recognition algorithms, improves the accuracy (Malisiewicz & Efros, 2007; Rabinovich et al., 2007; Rosenfeld & Weinshall, 2011). Unlike the human vision system, CNNs are hindered drastically in crowded scenes (e.g., Volokitin et al. (2017)). CNNs rely more on texture whereas humans pay more attention to shape (Geirhos et al., 2018a). Utilizing minimal recognizable images, Ullman et al. (2016) argued that the human visual system uses features and processes that are not used by current deep models.
5 As an example, consider a model that is trained to classify camels vs. cows, with camels always shown in sandy backgrounds and cows shown against grassy backgrounds. Although such a model does well during training, it gets confused when presented with cows in sandy backgrounds at test time (Beery et al., 2018). See also Rosenfeld et al. (2018) for another example in the context of object detection.
[Figure 2: Accuracy (%) of recognizers by year. The "ObjectNet paper" panel shows the reported 40-45% performance drop from ImageNet to ObjectNet; the "Our analysis" panel overlays our ObjectNet Top-1 (box) and Top-5 (box) results, which reduce the drop to 25-35% (plotted values include 13.86, 27.89, 31.19, 39.48, 49.84, and 61.50). A third panel, "Using our code", compares full images and bounding boxes under one codebase.]
3 EXPERIMENTS AND RESULTS
3.1 ACCURACY AND ROBUSTNESS AGAINST NATURAL DISTRIBUTION SHIFTS
A critique of Barbu et al. (2019). Barbu et al.'s work is a great contribution to the field, answering how well object recognition models generalize to real-world circumstances and controlling for biases in data collection. It, however, suffers from a major shortcoming: it makes no distinction between "object detection" and "object recognition". This confusion brings along several concerns:
1. They use the term “object detector” to refer to “object recognition” models. Object detection and object recognition are two distinct, yet related, tasks. Each one has its own models, datasets, evaluation measures, and inductive biases. For example, as shown in Fig. 1, images in object recognition datasets (e.g., ImageNet) often contain a single object, usually from a closeup view, whereas scenes in object detection datasets (e.g., MS COCO (Lin et al., 2014), OpenImages (Kuznetsova et al., 2018)) usually have multiple objects. Objects in the detection datasets vary more in some parameters such as occlusion and size. For instance, there is a larger variation in object scale in detection datasets (Singh & Davis, 2018). This discussion also relates to the distinction between “scene understanding” and “object recognition”. To understand a complex scene, as humans we look around, fixate on individual objects to recognize them, and accumulate information over fixations to perform more complex tasks such as answering a question or describing an event. To avoid biases in recognition datasets (e.g., typical scales or object views), we propose to (additionally) use detection datasets to study object recognition. We will discuss this further in Section 4.
2. Instead of applying models to isolated objects, Barbu et al. apply them to cluttered scenes containing multiple objects. Unlike ImageNet where the majority of images include only a single object, ObjectNet images have multiple objects in them and are often more cluttered. Therefore, the drop in performance of models on ObjectNet can be merely due to the fact that pretrained models on ImageNet have been trained on individual objects.
3. In addition to top-1 accuracy, Barbu et al. also report top-5 accuracy. One might argue that this may suffice in dealing with scenes containing multiple objects. Top-5 accuracy was first introduced in Russakovsky et al. (2015) to remedy the issues with the top-1 accuracy. The latter can be overly stringent by penalizing predictions that appear in the image but do not correspond to the target label. Top-5 accuracy itself, however, has two shortcomings. First, a model can still be penalized if all of the five guesses exist in the image, but none is the image label. Both scores fall short in addressing the images with counter-intuitive labels (e.g., when non-salient objects are labeled; Appx. E). Second, on fine-grained classification tasks (ImageNet has several fine-grained classes e.g., dogs), allowing five
predictions can make certain class distinctions trivial (Shankar et al., 2020). For example, there are five turtles in the ImageNet class hierarchy (mud turtle, box turtle, loggerhead turtle, leatherback turtle, and terrapin) that are difficult to distinguish. A classifier may trick the score by generating all of these labels for a turtle image to ensure it predicts the correct label. Shankar et al. proposed to use multi-label accuracy as an alternative to the top-5 score. Each image has a set of target labels (i.e., multi-label annotations), and a prediction is marked correct if it corresponds to any of the target labels for that image. This score, however, may favor a model that generates correct labels but confuses their locations over a model that is spatially more precise but misses some objects (see also Beyer et al. (2020)). Regardless, since multi-label annotations for ObjectNet are not available, we report both top-1 and top-5 scores when feeding isolated objects to models.
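To make the scoring schemes above concrete, here is a minimal sketch (our illustration, not code from either paper) of top-k and multi-label accuracy; the logits tensor, label tensor, and target-label sets are toy placeholders.

```python
import torch

def topk_accuracy(logits, labels, k=5):
    # logits: (N, C) class scores; labels: (N,) one target label per image.
    topk = logits.topk(k, dim=1).indices                    # (N, k) predictions
    return (topk == labels.unsqueeze(1)).any(dim=1).float().mean().item()

def multilabel_accuracy(logits, target_sets):
    # Multi-label accuracy (Shankar et al., 2020): the top-1 prediction is
    # correct if it matches ANY of the acceptable labels for that image.
    preds = logits.argmax(dim=1)
    return sum(int(p.item() in t) for p, t in zip(preds, target_sets)) / len(target_sets)

logits = torch.randn(4, 1000)                               # toy scores for 4 images
labels = torch.tensor([3, 7, 7, 42])
print(topk_accuracy(logits, labels, k=1), topk_accuracy(logits, labels, k=5))
print(multilabel_accuracy(logits, [{3, 5}, {7}, {1, 7}, {42, 43}]))
```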
Bounding box annotation. The 113 object categories in the ObjectNet dataset that overlap with ImageNet contain 18,574 images in total. On this subset, the average number of images per category is 164.4 (min=55, max=284). Fig. 8 in Appx. A shows the distribution of the number of images per category on this dataset (envelope and dish drying rack are the most and least frequent objects, respectively). We drew a bounding box around the object corresponding to the category label of each image. If there were multiple nearby objects from the same category (e.g., chairs around a table), we tried to include all of them in the bounding box. Some example scenes and their corresponding bounding boxes are given in Fig. 1. Appx. H shows more stats on ObjectNet.
Object recognition results. We employ six widely-used state of the art deep neural networks including AlexNet (Krizhevsky et al., 2012), VGG-19 (Simonyan & Zisserman, 2014), GoogLeNet (Szegedy et al., 2015), ResNet-152 (He et al., 2016), Inception-v3 (Szegedy et al., 2016)6, and MNASNet (Tan et al., 2019). AlexNet, VGG-19, and ResNet-152 have also been used in the ObjectNet paper (Barbu et al., 2019). We use the PyTorch implementation of these models7. Since the code from the ObjectNet paper is unavailable (at the time of preparing this work), in addition to applying models to bounding boxes and plotting the results on top of the results from the ObjectNet paper, we also run our code on both the bounding boxes and the full images. This allows a fair comparison and helps mitigate possible inconsistencies in data processing (e.g., different data normalization schemes or test time data augmentation such as rotation, scale, color jittering, cropping, etc.).
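A minimal sketch of this evaluation setup, assuming a pretrained torchvision classifier applied either to the full image or to the annotated bounding box; the image path and box coordinates are placeholders, and the preprocessing follows the standard torchvision recipe.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing (mean/std from the torchvision model zoo).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet152(pretrained=True).eval()

def predict(image, box=None):
    # box = (left, upper, right, lower) in pixels; None means the full image.
    if box is not None:
        image = image.crop(box)
    x = preprocess(image).unsqueeze(0)                   # (1, 3, 224, 224)
    with torch.no_grad():
        return model(x).argmax(dim=1).item()             # ImageNet class index

img = Image.open("objectnet_example.jpg").convert("RGB")     # placeholder path
print(predict(img), predict(img, box=(120, 80, 480, 400)))   # full image vs. box
```

Note that Inception-v3 expects 299 × 299 inputs rather than 224 × 224; the other models use the recipe above.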
Fig. 2 shows an overlay of our results on Fig. 1 from the ObjectNet paper. As can be seen, applying models to the object bounding box instead of the entire scene improves the accuracy by about 10-15%. Although the gap is narrower now, models still significantly underperform on ObjectNet compared to the ImageNet dataset. Using our code, the improvement going from full images to bounding boxes is around 20-30% across all tested models (the right panel in Fig. 2). Our results using the full image are lower than Barbu et al.'s results using the full image (possibly because we do not utilize data augmentation). This relative difference entails that applying their code to bounding boxes will likely improve the performance beyond the 10% that we obtained here. Even assuming a 25% gain in performance on top of their best results when using boxes would still not close the performance gap, which indicates that ObjectNet remains a challenging dataset for testing object recognition models.
Breakdown of accuracy over the 113 categories is shown in Appx. B (Figs. 9 & 10 over isolated objects and Figs. 11 & 12 over the full image). Interestingly, in both cases, almost all models, except GoogLeNet on isolated objects and AlexNet on the full image, perform the best over the safety pin category. Inspecting the images from this class, we found that they have a single safety pin often held by a person (perhaps about the same distance from the camera thus similar scales). The same story is true about the banana class which is the second easiest category using the bounding boxes. This object becomes much harder to recognize when using the full image (26.88% vs. 70.3% using boxes) which highlights the benefit of applying models to isolated objects rather than scenes.
3.2 ACCURACY AND ROBUSTNESS AGAINST SYNTHETIC DISTRIBUTION SHIFTS
3.2.1 ROBUSTNESS AGAINST COMMON IMAGE CORRUPTIONS
Previous work has shown that ImageNet-trained CNNs generalize poorly over a wide range of image distortions (e.g., Hendrycks & Dietterich (2019); Azulay & Weiss (2019); Dodge & Karam (2017)). These works, however, have applied CNNs to the whole scene. Here, we ask whether applying the models to the bounding boxes can improve robustness against image distortions. Following Hendrycks & Dietterich (2019), we systematically test how model accuracy degrades if images are corrupted by 14 different types of distortions including Gaussian noise, shot noise, impulse noise, defocus blur, glass blur, motion blur, zoom blur, snow, frost, fog, brightness, contrast, elastic transform, and JPEG compression at 3 levels of corruption severity. Fig. 36 (Appx. F) shows sample images along with their distortions. Ten images from each of the 113 categories of ObjectNet (1130 images in total) were fed to three models including VGG-19, Inception-v3, and ResNet-152.
6 Barbu et al. have used Inception-v4.
7 https://pytorch.org/docs/stable/torchvision/models.html
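To make the corruption protocol concrete, here is a simplified stand-in for one of the 14 distortions (Gaussian noise) at three severity levels; the severity-to-noise mapping is our own choice for illustration, not the schedule defined by Hendrycks & Dietterich (2019).

```python
import numpy as np
from PIL import Image

def gaussian_noise(image, severity=1):
    # The severity-to-sigma mapping below is an assumption for illustration;
    # the benchmark code specifies its own schedule per corruption type.
    sigma = {1: 0.04, 2: 0.08, 3: 0.12}[severity]
    x = np.asarray(image, dtype=np.float32) / 255.0
    x = np.clip(x + np.random.normal(scale=sigma, size=x.shape), 0.0, 1.0)
    return Image.fromarray((x * 255).astype(np.uint8))

img = Image.open("objectnet_example.jpg").convert("RGB")   # placeholder path
corrupted = [gaussian_noise(img, s) for s in (1, 2, 3)]    # 3 severity levels
```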
Aggregate results over the full image and the object bounding box (both resized to 224 × 224 pixels) are shown in Fig. 3. All three models are more robust when applied to the object bounding box than to the full image at all corruption levels, using both top-1 and top-5 scores (left two panels). Among the models, ResNet-152 performs best and is the most robust, followed by Inception-v3. For nearly all of the 113 object categories, using bounding boxes leads to higher robustness than using the full image (the third panel). Similarly, using bounding boxes results in higher robustness against all distortion types (the right-most panel). Across distortion types, shown in Figs. 37 & 38 (Appx. F), ResNet-152 consistently outperforms the other two models at all severity levels, followed by Inception-v3. It seems that models are hindered most by impulse noise, frost, zoom blur, and snow distortions; the top-1 accuracy at severity level 2 on these distortions is below 20%. Overall, we conclude that limiting the object area to the bounding box leads not only to higher prediction accuracy but also to higher robustness against image distortions. Extrapolating this approach, can we improve robustness by shrinking the object region even further, using segmentation masks? We will thoroughly investigate this question in the next subsections.
3.2.2 ROBUSTNESS AGAINST ADVERSARIAL PERTURBATIONS
Despite being very accurate, CNNs are highly vulnerable to adversarial inputs (Szegedy et al., 2013; Goodfellow et al., 2014). These inputs are crafted carefully and maliciously by adding small imperceptible perturbations to them (e.g., altering the value of a pixel up to 8 units under the ℓ∞-norm; pixels in the range [0, 255]). Here we apply the ImageNet pretrained models to the 1130 images that were selected above. The models are tested against the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014) at two perturbation budgets in the untargeted white-box setting.
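A minimal FGSM sketch, assuming the model consumes images in [0, 1] (i.e., any normalization is folded into the model) and that the attack operates under the ℓ∞-norm:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # Untargeted white-box FGSM (Goodfellow et al., 2014): a single step of
    # size eps in the direction of the loss-gradient sign; x: (N, 3, H, W).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```

With pixels scaled to [0, 1], the 8-unit budget mentioned above corresponds to eps = 8/255.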
Table 1 shows the results. We find that models are more resilient against the FGSM attack when applied to the bounding box than the full image. While the input size is the same in both cases (224×224), the adversary has more opportunity to mislead the
classifier in the full image case since a larger fraction of pixels play an insignificant role in the decisions made by the network. This aligns with observations from the visualization tools (e.g., Selvaraju et al. (2017)) revealing that CNNs indeed rely only on a small subset of image pixels to elicit a decision. One might argue that the lower robustness on the full images could be due to training and test discrepancy (i.e., training models on single objects and applying them to the entire scene). To address this, in the next subsection we train and test models in the same condition.
3.3 THE INFLUENCE OF THE SURROUNDING CONTEXT ON ROBUSTNESS
Despite a large body of literature on whether and how much visual context benefits CNNs in terms of accuracy and robustness8, the matter has not been settled yet (e.g., Bar (2004); Torralba & Sinha (2001); Rabinovich et al. (2007); Rosenfeld et al. (2018); Heitz & Koller (2008); Divvala et al. (2009); Zhu et al. (2016); Xiao et al. (2020); Malisiewicz & Efros (2007)). To study how context surrounding an object impacts model accuracy and robustness in more detail, we conducted
two experiments. In the first one, we trained two CNNs (2 conv layers, each followed by a pooling layer, and 2 final fc layers) on the MNIST and Fashion MNIST datasets, for which it is easy to derive the foreground masks (Figs. 39 & 40; Appx. G). CNNs were trained on either the original clean images or the foreground objects placed on a white noise background. We then tested the models against the FGSM attack with and without background subtraction. With background subtraction, we essentially assume that the adversary has access only to the foreground object (i.e., effectively removing the perturbations that fall on the background). As the results in Fig. 4 show, background subtraction improves the robustness substantially for both models and over both datasets.
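A sketch of the background-subtraction defense, assuming a binary foreground mask is available (for MNIST and Fashion MNIST, thresholding the clean image is a reasonable stand-in): perturbations that fall outside the mask are replaced by the clean background.

```python
import torch

def subtract_background(x_adv, x_clean, fg_mask):
    # Keep adversarial pixels only on the foreground and restore the clean
    # background, i.e., remove perturbations that fall on the background.
    return fg_mask * x_adv + (1 - fg_mask) * x_clean

x_clean = torch.rand(1, 1, 28, 28)                 # toy MNIST-sized image in [0, 1]
x_adv = (x_clean + 0.3 * torch.randn_like(x_clean)).clamp(0, 1)
fg_mask = (x_clean > 0.5).float()                  # crude threshold-based mask
x_defended = subtract_background(x_adv, x_clean, fg_mask)
```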
To examine whether the above conclusion generalizes to more complex natural scenes, we ran a second experiment. First, we selected images from ten classes of the MS COCO dataset including chair, car, book, bottle, dining table, umbrella, boat, motorcycle, sheep, and cow. Objects from these classes come with a segmentation mask (one object per image; 100 images per category; 1000 images in total). Around 32.7% of the image pixels fall inside the object bounding box and around 58.1% of the bounding box pixels fall inside the object mask. Fig. 5 shows a sample chair alongside its bounding box and its segmentation mask.
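A sketch of how the three input conditions can be produced from one annotated image; the box tuple and binary mask are assumed to come from the MS COCO annotations (e.g., via pycocotools), which we do not show here.

```python
import numpy as np
from PIL import Image

def make_inputs(image, box, mask):
    # Three conditions: 1) full image, 2) bounding-box crop, 3) segmented
    # object on a dark background. box = (left, upper, right, lower);
    # mask is an HxW array of {0, 1}.
    full = image
    bbox = image.crop(box)
    arr = np.asarray(image) * mask[..., None]        # zero out the background
    seg = Image.fromarray(arr.astype(np.uint8)).crop(box)
    return full, bbox, seg
```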
We then trained three ResNet-18 models (finetuned on ImageNet), one per each input type: 1) full image, 2) bounding box, and 3) segmented object (placed on a dark background). Models were trained on 70 images per category (700 in total) for 10 epochs and were then tested on the remaining 30 images per category. An attempt was made to tune the parameters to attain the best test accuracy in each case (e.g., by avoiding overfitting). The test accuracies9 of the models are, in order, 66.9%, 78%, and 80.3%. One reason behind the lower prediction accuracy using boxes might be that multiple objects may fit inside the bounding box (e.g., for elongated objects such as a broom). Model performance against FGSM and ℓ∞ PGD-5 (Projected Gradient Descent by Madry et al. (2017)) adversarial attacks is shown in Fig. 5 (left panel). We observe that training models on segmented objects leads to higher adversarial robustness against both types of attacks. The improvement is more pronounced at higher perturbations. We also considered a condition in which we masked the perturbations that fall on the background, denoted as "Seg. Mask + FG" in the figure. We noticed even higher robustness against the attacks by removing the background perturbations. These results encourage using foreground detection as an effective adversarial defense.
8 The majority of such works are focused on model accuracy.
9 Taori et al. (2020) argue that robustness scores should control for accuracy, as more predictive models are in general more robust. To avoid this issue we used models that have about the same standard accuracy.
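For completeness, a minimal ℓ∞ PGD sketch in the spirit of Madry et al. (2017); steps=5 matches the PGD-5 setting above, while the step size alpha and the random start are assumptions on our part, and pixels are again assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps, alpha, steps=5):
    # l_inf PGD: `steps` signed-gradient steps of size alpha, projected back
    # into the eps-ball around x after every step; pixels stay in [0, 1].
    x = x.detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)     # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```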
The middle panel in Fig. 5 shows model robustness against noise corruptions (averaged over the 14 distortions used in Section 3.2.1). Here again, we find that using segmentation masks leads to higher robustness compared to the full image and object boxes. "Seg. Mask + FG" leads to the best robustness among the input types. While it might be hard to draw a general conclusion regarding the superiority of segmentation masks over bounding boxes in object recognition accuracy, our investigation suggests that using masks leads to a significant boost in adversarial robustness with little or no drop in standard accuracy. Our results offer an upper bound on the utility of segmentation masks in robustness. More work is needed to incorporate this feat into CNNs (e.g., using attention).
3.3.1 ROBUSTNESS AGAINST GEOMETRIC TRANSFORMATIONS
We also tested the ResNet-18 model (i.e., trained over the full image, the bounding box, and the segmented object on ObjectNet; as above) against three geometric transformations including scaling, in-plane rotation, and horizontal translation. Fig. 6 shows the results over the 300 test images that were used in the previous subsection. We find that the model trained on segmentation masks is more robust than the other two models over all three geometric transformations, followed by the models trained on the object bounding boxes and the full image, in order.
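The three transformations can be generated with torchvision; a minimal sketch follows (the magnitudes below are arbitrary examples, not the exact grid we swept, and the image path is a placeholder).

```python
import torchvision.transforms.functional as TF
from PIL import Image

img = Image.open("coco_example.jpg").convert("RGB")       # placeholder path

rotated  = TF.rotate(img, angle=30)                       # in-plane rotation
shifted  = TF.affine(img, angle=0, translate=(40, 0),     # horizontal shift (px)
                     scale=1.0, shear=0)
rescaled = TF.affine(img, angle=0, translate=(0, 0),
                     scale=0.75, shear=0)                 # scale to 75%
```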
3.4 QUALITATIVE INSPECTION OF OBJECTNET IMAGES AND ANNOTATIONS
During the annotation of ObjectNet images, we came across the following observations: a) Some objects look very different when they are in motion (e.g., the fan in row 4 of Fig. 34 in Appx. D), or when they are shadowed or occluded by other objects (e.g., the hammer in Fig. 34 row 4), b) Some object instances differ a lot from the typical instances in the same class (e.g., the helmet in Fig. 34 row 5; the orange in Fig. 33 row 5), c) Some objects can be recognized only through reading their captions (e.g., the pet food container in Fig. 33 row 2), d) Some images have wrong labels (e.g., the pillow in Fig. 33 row 2; the skirt in Fig. 33 row 1; the tray in Fig. 34 row 2; see also Appx. E), e) Some objects are extremely difficult for humans (e.g., the tennis racket in Fig. 34 row 4; the shovel in Fig. 33 row 4; the tray in Fig. 33 row 1), f) In many images, objects are occluded by hands holding them (e.g., the sock and the shovel in Fig. 33 row 4), g) Some objects are hard to recognize in dim light (e.g., the printer in Fig. 33 row 2), and h) Some categories are often confused with other categories in the same set. Example sets include {bath towel, bed sheet, full sized towel, dishrag or hand towel}, {sandal, dress shoe (men), running shoe}, {t-shirt, dress, sweater, suit jacket, skirt}, {ruler, spatula, pen, match}, {padlock, combination lock}, and {envelope, letter}. The left panel in Fig. 7 shows four easy (highly confident correct predictions) and four hard (highly confident misclassifications) examples for ResNet-152 over six ObjectNet categories. In terms of the difficulty level, easy (difficult) objects for models appear easy (difficult) to humans too. Also, our qualitative inspection shows that ObjectNet includes a large number of objects that can be recognized only after a careful examination (the right panel in Fig. 7). More examples are given in Appx. C.
4 TAKEAWAYS AND DISCUSSION
Our investigation reveals that deep models perform significantly better when applied to isolated objects rather than the entire scene. The reason behind this is two-fold. First, there is less variability in single objects compared to scenes containing multiple objects. Second, the deep models (used here and also in the ObjectNet paper) have been trained on ImageNet images, which are less cluttered compared to the ObjectNet images. We anticipate that training models from scratch on large scale datasets that contain isolated objects will likely result in even higher accuracy. Assuming around a 30% increase in performance (at best) on top of Barbu et al.'s results using bounding boxes still leaves a large gap of at least 15% between ImageNet and ObjectNet, which means that ObjectNet is indeed much harder. It covers a wider range of variations than ImageNet, including object instances, viewpoints, rotations, and occlusions, which pushes the limits of object recognition models. Hence, despite its limitations and biases, the ObjectNet dataset remains a great platform to test deep models in realistic situations.
We envision four research directions for future work in this area. First, background subtraction is a promising mechanism and should be investigated further over large scale datasets (given the availability of high-resolution masks; e.g., MS COCO). We found that it improves robustness substantially over various types of image perturbations and attacks. Humans can discern the foreground object from the image background with high precision. This feat might be the key to robustness and hints towards an interplay and feedback loop between recognition and segmentation that is currently missing in CNNs. Second, measuring human performance on ObjectNet will provide a useful baseline for gauging model performance. Barbu et al. report an accuracy of around 95% when they asked subjects to mention the objects that are present in the scene. This task, however, is different from recognizing isolated objects similar to the regime that was considered here (i.e., akin to rapid scene categorization tasks; see Serre et al. (2007)). Besides, error patterns of models and humans (e.g., Borji & Itti (2014)), in addition to crude accuracy measures, will inform us about the differences in object recognition mechanisms between humans and machines. It could be that models work in a completely different fashion than the human visual system. Third, as discussed in Section 3.1, multi-label prediction accuracy is more appropriate for evaluating recognition models. Annotating all objects in ObjectNet images will thus provide an additional dimension to assess models. In this regard, we propose a new task where the goal is to recognize objects in their natural contexts. This task resembles (cropped) object recognition and object detection, but it is slightly different (i.e., the goal here is to recognize an object limited by a bounding box given all available information in the scene). This is essentially an argument against the recognition-detection dichotomy. Finally, it would be interesting to see how well state of the art object detectors perform on the ObjectNet dataset (e.g., over the overlapped classes between ObjectNet and MS COCO (Lin et al., 2014)). We expect a significant drop in detection performance since it is hard to recognize objects in this dataset.
From a broader perspective, our study reinforces the idea that there is more to scene understanding than merely learning statistical correlations. In particular, background subtraction and visual context are crucial in robust recognition and demand further investigation in future studies.
A FREQUENCY OF THE IMAGES PER CATEGORY
B MODEL ACCURACY PER CATEGORY USING BOXES VS. FULL IMAGE
C EASIEST AND HARDEST OBJECTS FOR THE RESNET-152 MODEL
D SOME CHALLENGING EXAMPLES FOR HUMANS
E ANNOTATION ISSUES IN OBJECT RECOGNITION DATASETS
F ANALYSING MODEL ROBUSTNESS OVER NATURALLY DISTORTED IMAGES
G ADVERSARIAL DEFENSE USING FOREGROUND DETECTION ON MNIST AND FASHION MNIST
H STATISTICS OF OBJECTNET DATASET

1. What is the main contribution of the paper regarding object recognition?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the novelty and significance of the work?
4. Are there any suggestions for improving the paper or its contributions?

Review
OVERVIEW: The authors present a follow-up to the prior work of Barbu et al. on the task of Object Recognition* (name confusion addressed in cons below). Barbu et al. demonstrated that on a more realistic dataset like ObjectNet, models trained on a clean dataset like ImageNet suffer significant degradation. This work reduces the performance gap by cropping out the object using bounding box or mask information and running the recognition model on top of it. They do this for a variety of models (AlexNet, VGG-19, ResNet-152, Inception-v4, NASNet-A, PNASNet-5L) and transformations (image distortions, adversarial perturbations, context, geometric transformations).
PROS:
The paper is well-written and tackles an important topic of object recognition* in the wild. It tries to move away from the ImageNet driven approach that is currently present in the community to a more realistic scenario.
They build on the prior work of Barbu et al. and are able to reduce the performance gap demonstrated by Barbu et al. by using bounding box or mask cropped images of the object of interest.
They present a lot of experimental evaluation using a variety of models and transformations and demonstrate that their findings hold across all settings.
CONS:
I agree with the authors that ImageNet with a single (or few) object present in the center of the image with clear foreground-background separation is unrealistic. ObjectNet is a better snapshot of the real world and models trained on ImageNet suffer in ObjectNet. However, the proposed approach to crop out the object using bounding box information or mask information is moving the data distribution from the real world setting of ObjectNet closer to the ideal setting of ImageNet. This then leads to an expected improvement in performance. I appreciate the thoroughness of the results and evaluation presented but it does not feel like a novel contribution in my opinion.
The authors present 4 future research directions in Section 4. I would be more willing to accept the paper if one of these research directions is incorporated as a contribution. For example, the last research direction of applying an object detection model trained with MS COCO on ObjectNet images instead of an image classification model trained with ImageNet is something that is doable. I would encourage the authors to even consider an object detection model trained on LVIS which has a larger number of object categories. This moves further away from object recognition* to object detection + classification but the latter is what we would typically encounter in a real-world scenario. Even here, I would need some contribution or novel analysis besides re-running current experiments with detection models.
I strongly recommend a change of name to "Contemplating Real-World Object Classification" (no caps and classification instead of recognition). In my understanding, object recognition is a super-set of classification, detection, segmentation, etc. ImageNet leads to Image Classification models even if they are technically object classification. But sticking with the terminology of Object Recognition because it was used by Barbu et al is misleading. I would prefer Object Classification be used because that is the task of interest in this work.
REASON FOR RATING: I think there is an interesting problem of real-world object classification that is of significant importance and this work moves a little closer to analyzing possible ways to reduce the performance gap from ImageNet to ObjectNet. However, their key contribution is somewhat expected (not novel) and needs some more work before being conference-paper ready.
UPDATE: I have read the author feedback and the other reviews/discussions. I keep my original rating of 5. I think multiple authors raised the question of novelty relative to Barbu et al and the authors argue that they demonstrate the importance of context (whole image vs bounding box vs instance mask) for object recognition. Section 3.3 and Figure 5 is helpful in demonstrating it. However, the experimental setup is very limited (700 train + 300 test). COCO has 110K train and 5K val images and many more objects. If you argue that only 10 categories are common between COCO and ObjectNet, how many are common between LVIS and ObjectNet? I would strongly encourage the authors to leverage these pre-trained models and sharpen their message & contributions. I think they provide empirical justifications (important to the community) for expected results in moving from image to bounding box (same comment from multiple reviewers) but they need to de-emphasize that aspect and emphasize their results on context and robustness. A revision and resubmission to a different conference is encouraged. |
4ObjectNet dataset, however, has it own biases. It consists of indoor objects that are available to many people, are mobile, are not too large, too small, fragile or dangerous.
to build a new test set called ImageNetV2. They reported a performance gap of about 11% (top-1 acc.) between the performance of the best deep models on this dataset and the original test set. Similar observations have been made by Shankar et al. (2020). By evaluating 204 ImageNet models in 213 different test conditions, Taori et al. (2020) found that a) current synthetic robustness does not imply natural robustness. In other words, robustness measures for synthetic distribution shifts are weakly predictive of robustness on the natural distribution shifts, b) robustness measurements should control for accuracy since higher robustness can sometimes be explained by the higher accuracy on a standard unperturbed test set, and c) training models on larger and more diverse data improves robustness but does not lead to full closure of the performance gap. A comprehensive benchmark of distribution shifts in the wild, known as WILDS, has recently been published by Koh et al. (2020), encompassing different data modalities including vision. In D’Amour et al. (2020), authors regard “underspecification” a major challenge to the credibility and generalization of modern machine learning pipelines. An ML pipeline is underspecified when it returns models that perform very well on held-out test sets during training but perform poorly at deployment time.
Contextual interference. Context plays a significant role in pattern recognition and visual reasoning (e.g., Bar (2004); Torralba & Sinha (2001); Rabinovich et al. (2007); Heitz & Koller (2008); Galleguillos & Belongie (2010)). The extent to which visual context is being used by deep models is still unclear. Unlike models, humans are very good at exploiting context when it is helpful and discard it when it causes ambiguity. In other words, deep models do not understand what is the foreground object and what constitutes the background5. Nagarajan et al. (2020) mention that ML models utilize features (e.g., image background) which are spuriously correlated with the label during training. This makes them fragile at the test time when statistics slightly differ. As we argue here, this is one of the main reasons why deep models are so vulnerable to geometric and adversarial perturbations. Geirhos et al. (2020) have studied this phenomenon under the “shortcut learning” terminology from a broader perspective.
Insights from human vision. CNNs turn out to be good models of human vision and can explain the first feed-forward sweep of information (See Kriegeskorte (2015) for a review). They, however, differ from human visual processing in several important ways. Current object recognition methods do not rely on segmentation, whereas figure-ground segmentation plays a significant role in human vision, in particular for the encoding of spatial relations between 3D object parts (Biederman, 1987; Serre, 2019). Some computer vision works, predating deep learning, have also shown that pre-segmenting the image before applying the recognition algorithms, improves the accuracy (Malisiewicz & Efros, 2007; Rabinovich et al., 2007; Rosenfeld & Weinshall, 2011). Unlike the human vision system, CNNs are hindered drastically in crowded scenes (e.g., Volokitin et al. (2017)). CNNs rely more on texture whereas humans pay more attention to shape (Geirhos et al., 2018a). Utilizing minimal recognizable images, Ullman et al. (2016) argued that the human visual system uses features and processes that are not used by current deep models.
5As an example, consider a model that is trained to classify camels vs. cows, with camels always shown in sandy backgrounds and cows shown against grassy backgrounds. Although such a model does well during training, it gets confused when presented with cows in sandy backgrounds at test time (Beery et al., 2018). See also Rosenfeld et al. (2018) for another example in the context of object detection
O ur
a na
ly si
s
O bj
ec tN
et p
ap er
< Recognizers by year >
13.86
31.19
ObjectNet Top-5 (box) ObjectNet Top-1 (box)
49.84
61.50
39.48
27.89
25-35% performance drop A cc
ur ac
y %
A cc
ur ac
y %
40-45% performance drop
Using our code
3 EXPERIMENTS AND RESULTS
3.1 ACCURACY AND ROBUSTNESS AGAINST NATURAL DISTRIBUTION SHIFTS
A critic of Barbu et al. (2019). Barbu et al.’s work is a great contribution to the field to answer how well object recognition models generalize to the real-world circumstances and to control for biases in data collection. It, however, suffers from a major shortcoming that is making no distinction between “object detection” and “object recognition”. This confusion brings along several concerns:
1. They use the term “object detector” to refer to “object recognition” models. Object detection and object recognition are two distinct, yet related, tasks. Each one has its own models, datasets, evaluation measures, and inductive biases. For example, as shown in Fig. 1, images in object recognition datasets (e.g., ImageNet) often contain a single object, usually from a closeup view, whereas scenes in object detection datasets (e.g., MS COCO (Lin et al., 2014), OpenImages (Kuznetsova et al., 2018)) usually have multiple objects. Objects in the detection datasets vary more in some parameters such as occlusion and size. For instance, there is a larger variation in object scale in detection datasets (Singh & Davis, 2018). This discussion also relates to the distinction between “scene understanding” and “object recognition”. To understand a complex scene, as humans we look around, fixate on individual objects to recognize them, and accumulate information over fixations to perform more complex tasks such as answering a question or describing an event. To avoid biases in recognition datasets (e.g., typical scales or object views), we propose to (additionally) use detection datasets to study object recognition. We will discuss this further in Section 4.
2. Instead of applying models to isolated objects, Barbu et al. apply them to cluttered scenes containing multiple objects. Unlike ImageNet where the majority of images include only a single object, ObjectNet images have multiple objects in them and are often more cluttered. Therefore, the drop in performance of models on ObjectNet can be merely due to the fact that pretrained models on ImageNet have been trained on individual objects.
3. In addition to top-1 accuracy, Barbu et al. also report top-5 accuracy. One might argue that this may suffice in dealing with scenes containing multiple objects. Top-5 accuracy was first introduced in Russakovsky et al. (2015) to remedy the issues with the top-1 accuracy. The latter can be overly stringent by penalizing predictions that appear in the image but do not correspond to the target label. Top-5 accuracy itself, however, has two shortcomings. First, a model can still be penalized if all of the five guesses exist in the image, but none is the image label. Both scores fall short in addressing the images with counter-intuitive labels (e.g., when non-salient objects are labeled; Appx. E). Second, on fine-grained classification tasks (ImageNet has several fine-grained classes e.g., dogs), allowing five
predictions can make certain class distinctions trivial (Shankar et al., 2020). For example, there are five turtles in the ImageNet class hierarchy (mud turtle, box turtle, loggerhead turtle, leatherback turtle, and terrapin) that are difficult to distinguish. A classifier may trick the score by generating all of these labels for a turtle image to ensure it predicts the correct label. Shankar et al. proposed to use multi-label accuracy as an alternative to top-5 score. Each image has a set of target labels (i.e., multi-label annotations). A prediction is marked correct if it corresponds to any of the target labels for that image. This score, however, may favor a model that generates correct labels but may confuse the locations over a model that is spatially more precise but misses some objects (See also Beyer et al. (2020)). Regardless, since multi-label annotations for ObjectNet are not available, we report both top-1 and top-5 scores when feeding isolated objects to models.
Bounding box annotation. The 113 object categories in the ObjectNet dataset, overlapped with the ImageNet, contain 18,574 images in total. On this subset, the average number of images per category is 164.4 (min=55, max=284). Fig. 8 in Appx. A shows the distribution of the number of images per category on this dataset (envelope and dish drying rack are the most and least frequent objects, respectively). We drew a bounding box around the object corresponding to the category label of each image. If there were multiple nearby objects from the same category (e.g., chairs around a table), we tried to include all of them in the bounding box. Some example scenes and their corresponding bounding boxes are given in Fig. 1. Appx. H shows more stats on ObjectNet. Object recognition results. We employ six widely-used state of the art deep neural networks including AlexNet (Krizhevsky et al., 2012), VGG-19 (Simonyan & Zisserman, 2014), GoogLeNet (Szegedy et al., 2015), ResNet-152 (He et al., 2016), Inception-v3 (Szegedy et al., 2016)6, and MNASNet (Tan et al., 2019). AlexNet, VGG-19, and ResNet-152 have also been used in the ObjectNet paper (Barbu et al., 2019). We use the PyTorch implementation of these models7. Since the code from the ObjectNet paper is unavailable (at the time of preparing this work), in addition to applying models to bounding boxes and plotting the results on top of the results from the ObjectNet paper, we also run our code on both the bounding boxes and the full images. This allows a fair comparison and helps mitigate possible inconsistency in data processing methods (e.g., different data normalization schemes or test time data augmentation such as rotation, scale, color jittering, cropping, etc.).
Fig. 2 shows an overlay of our results in Fig. 1 from the ObjectNet paper. As can be seen, applying models to the object bounding box instead of the entire scene improves the accuracy about 10-15%. Although the gap is narrower now, models still significantly underperform on ObjectNet than the ImageNet dataset. Using our code, the improvement going from full image to bounding boxes is around 20-30% across all tested models (the right panel in Fig. 2). Our results using the full image are lower than Barbu et al.’s results using the full image (possibly because we do not utilize data augmentation). This relative difference entails that applying their code to bounding boxes will likely improve the performance beyond 10% that we obtained here. Assuming 25% gain in performance on top of their best results when using boxes, will still not close the performance gap which indicates that ObjectNet remains a challenging dataset for testing object recognition models.
Breakdown of accuracy over the 113 categories is shown in Appx. B (Figs. 9 & 10 over isolated objects and Figs. 11 & 12 over the full image). Interestingly, in both cases, almost all models, except GoogLeNet on isolated objects and AlexNet on the full image, perform the best over the safety pin category. Inspecting the images from this class, we found that they have a single safety pin often held by a person (perhaps about the same distance from the camera thus similar scales). The same story is true about the banana class which is the second easiest category using the bounding boxes. This object becomes much harder to recognize when using the full image (26.88% vs. 70.3% using boxes) which highlights the benefit of applying models to isolated objects rather than scenes.
3.2 ACCURACY AND ROBUSTNESS AGAINST SYNTHETIC DISTRIBUTION SHIFTS
3.2.1 ROBUSTNESS AGAINST COMMON IMAGE CORRUPTIONS
Previous work has shown that ImageNet-trained CNNs generalize poorly over a wide range of image distortions (e.g., Hendrycks & Dietterich (2019); Azulay & Weiss (2019); Dodge & Karam (2017)). These works, however, have applied CNNs to the whole scene. Here, we
6Barbu et al. have used Inception-v4. 7https://pytorch.org/docs/stable/torchvision/models.html
ask whether applying the models to the bounding boxes can improve robustness against image distortions. Following Hendrycks & Dietterich (2019), we systematically test how model accuracy degrades if images are corrupted by 14 different types of distortions including Gaussian noise, shot noise, impulse noise, defocus blur, glass blur, motion blur, zoom blur, snow, frost, fog, brightness, contrast, elastic transform, and JPEG compression at 3 levels of corruption severity. Fig. 36 (Appx. F) shows sample images along with their distortions. Ten images from each of the 113 categories of ObjectNet (1130 images in total) were fed to three models including VGG-19, Inception-v3, and ResNet-152.
Aggregate results over the full image and the object bounding box (both resized to 224 × 224 pixels) are shown in Fig. 3. All three models are more robust when applied to the object bounding box than the full image at all corruption levels, using both top-1 and top-5 scores (left two panels). Among models, ResNet-152 performs better and is the most robust model. It is followed by the Inception-v3 model. For nearly all of the 113 object categories, using bounding boxes leads to higher robustness than using the full image (the third panel). Similarly, using bounding boxes results in higher robustness against all distortion types (the right-most panel). Across distortion types, shown in Figs. 37 & 38 (Appx. F), ResNet-152 consistently outperforms the other two models at all severity levels, followed by Inception-v3. It seems that models are hindered more by impulse noise, frost, zoom blur, and snow distortions. The top-1 accuracy at severity level 2 on these distortions is below 20%. Overall, we conclude that limiting the object area only to the bounding box leads not only to higher prediction accuracy but also to higher robustness against image distortions. Extrapolating this approach, can we improve robustness by shrinking the object region even further by using the segmentation masks? We will thoroughly investigate this question in the next subsections.
3.2.2 ROBUSTNESS AGAINST ADVERSARIAL PERTURBATIONS
Despite being very accurate, CNNs are highly vulnerable to adversarial inputs (Szegedy et al., 2013; Goodfellow et al., 2014). These inputs are crafted carefully and maliciously by adding small imperceptible perturbations to them (e.g., altering the value of a pixel up to 8 units under the `∞-norm; pixels in the range [0, 255]). Here we apply the ImageNet pretrained models to 1130 images that were selected above. The models are tested against the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014) at two perturbation budgets in the untargeted white-box setting.
Table. 1 shows the results. We find that models are more resilient against the FGSM attack when applied to the bounding box than the full image. While the input size is the same in both cases (224×224), the adversary has more opportunity to mislead the
classifier in the full image case since a larger fraction of pixels play an insignificant role in the decisions made by the network. This aligns with observations from the visualization tools (e.g., Selvaraju et al. (2017)) revealing that CNNs indeed rely only on a small subset of image pixels to elicit a decision. One might argue that the lower robustness on the full images could be due to training and test discrepancy (i.e., training models on single objects and applying them to the entire scene). To address this, in the next subsection we train and test models in the same condition.
3.3 THE INFLUENCE OF THE SURROUNDING CONTEXT ON ROBUSTNESS
Despite a large body of literature on whether and how much visual context benefits CNNs in terms of accuracy and robustness8, the matter has not been settled yet (e.g., Bar (2004); Torralba & Sinha (2001); Rabinovich et al. (2007); Rosenfeld et al. (2018); Heitz & Koller (2008); Divvala et al. (2009); Zhu et al. (2016); Xiao et al. (2020); Malisiewicz & Efros (2007)). To study how context surrounding an object impacts model accuracy and robustness in more detail, we conducted
two experiments. In the first one, we trained two CNNs (2 conv layers, each followed by a pooling layer and 2 final fc layers) on MNIST and Fashion MNIST datasets, for which it is easy to derive the foreground masks (Figs. 39 & 40; Appx. G). CNNs were trained on either the original clean images or the foreground objects placed on a white noise background. We then tested the models against the FGSM attack w/o background subtraction. With background subtraction, we essentially assume that the adversary has access only to the foreground object (i.e., effectively removing the perturbations that fall on the background). As results in Fig. 4 show, background subtraction improves the robustness substantially using both models and over both datasets.
To examine whether the above conclusion generalizes to more complex natural scenes, we ran a second experiment. First, we selected images from ten classes of the MS COCO dataset including chair, car, book, bottle, dinning table, umbrella, boat, motorcycle, sheep, and cow. Objects from these classes come with a segmentation mask (one object per image; 100 images per category; 1000 images in total). Around 32.7% of the image pixels fall inside the object bounding box and around 58.1% of the bounding box pixels fall inside the object mask. Fig. 5 shows a sample chair alongside its bounding box and its segmentation mask.
We then trained three ResNet-18 models (finetuned on ImageNet), one per each input type: 1) full image, 2) bounding box, and 3) segmented object (placed in a dark background). Models were trained on 70 images per category (700 in total) for 10 epochs and were then tested on the remaining 30 images per category. An attempt was made to tune the parameters to attain the best test accuracy in each case (e.g., by avoiding overfitting). The test accuracy9 of models in order are 66.9%, 78%, and 80.3%. One reason
behind lower prediction accuracy using boxes might be because multiple objects may fit inside
8Majority of such works are focused on model accuracy 9 Taori et al. (2020) argue that robustness scores should control for accuracy as more predictive models in
general are more robust. To avoid this issue we used models that have about the same standard accuracy.
the bounding box (e.g., for elongated objects such as broom). Model performance against FGSM and `∞ PGD-5 (Projected Gradient Descent by Madry et al. (2017)) adversarial attacks are shown in Fig. 5 (left panel). We observe that training models on segmented objects leads to higher adversarial robustness against both types of attacks. The improvement is more pronounced at higher perturbations. We also considered a condition in which we masked the perturbations that fall on the background, denoted as “Seg. Mask + FG” in the figure. We noticed even higher robustness against the attacks by removing the background perturbations. These results encourage using foreground detection as an effective adversarial defense.
The middle panel in Fig. 5 shows model robustness against noise corruptions (averaged over the 14 distortions used in Section 3.2.1). Here again, we find that using segmentation masks leads to higher robustness compared to the full image and object boxes. “Seg. Mask + FG” leads to the best robustness among the input types. While it might be hard to draw a general conclusion regarding the superiority of the segmentation masks over bounding boxes in object recognition accuracy, our investigation suggests that using masks leads to a significant boost in adversarial robustness with little or no drop in standard accuracy. Our results offer an upper bound in the utility of segmentation masks in robustness. More work is needed to incorporate this feat in CNNs (i.e., using attention).
3.3.1 ROBUSTNESS AGAINST GEOMETRIC TRANSFORMATIONS
We also tested the ResNet-18 model (i.e., trained over the full image, the bounding box, and the segmented object on MS COCO; as above) against three geometric transformations including scaling, in-plane rotation, and horizontal translation. Fig. 6 shows the results over the 300 test images that were used in the previous subsection. We find that the model trained on segmentation masks is more robust than the other two models over all three geometric transformations, followed by the models trained on the object bounding boxes and the full image, in order.
3.4 QUALITATIVE INSPECTION OF OBJECTNET IMAGES AND ANNOTATIONS
During the annotation of ObjectNet images, we came across the following observations: a) Some objects look very different when they are in motion (e.g., the fan in row 4 of Fig. 34 in Appx. D), or when they are shadowed or occluded by other objects (e.g., the hammer in Fig. 34 row 4), b) Some object instances differ a lot from the typical instances in the same class (e.g., the helmet in Fig. 34 row 5; the orange in Fig. 33 row 5), c) Some objects can be recognized only through reading their captions (e.g., the pet food container in Fig. 33 row 2), d) Some images have wrong labels (e.g., the pillow in Fig. 33 row 2; the skirt in Fig. 33 row 1; the tray in Fig. 34 row 2; See also Appx. E), e) Some objects are extremely difficult for humans (e.g., the tennis racket in Fig. 34 row 4; the shovel in Fig. 33 row 4; the tray in Fig. 33 row 1), f) In many images, objects are occluded by hands holding them (e.g., the sock and the shovel in Fig. 33 row 4), g) Some objects are hard to recognize in dim light (e.g., the printer in Fig. 33 row 2), and h) Some categories are often confused with other categories in the same set. Example sets include {bath towel, bed sheet, full sized towel, dishrag or hand towel}, {sandal, dress shoe (men), running shoe}, {t-shirt, dress, sweater, suit jacket, skirt}, {ruler, spatula, pen, match}, {padlock, combination lock}, and {envelope, letter}. The left panel in Fig. 7 shows four easy (highly confident correct predictions) and four hard (highly confident misclassifications) examples for ResNet-152 over six ObjectNet categories. In terms of the difficulty level, easy (difficult) objects for models appear easy (difficult) to humans too. Also, our qualitative inspection shows that ObjectNet includes a large number of objects that can be recognized only after a careful examination (the right panel in Fig. 7). More examples are given in Appx. C.
4 TAKEAWAYS AND DISCUSSION
Our investigation reveals that deep models perform significantly better when applied to isolated objects rather than the entire scene. The reason behind this is two-fold. First, there is less variability in single objects compared to scenes containing multiple objects. Second, deep models (used here and also in the ObjectNet paper) have been trained on ImageNet images which are less cluttered compared to the ObjectNet images. We anticipate that training models from scratch on large scale datasets that contain isolated objects will likely result in even higher accuracy. Assuming around 30% increase in performance (at best) on top of Barbu et al.’s results using bounding boxes still leaves a large gap of at least 15% between ImageNet and ObjectNet, which means that ObjectNet is indeed much harder. It covers a wider range of variations than ImageNet including object instances, viewpoints, rotations, occlusions, etc., which pushes the limits of object recognition models. Hence, despite its limitations and biases, the ObjectNet dataset remains a great platform to test deep models in realistic situations.
We envision four research directions for the future work in this area. First, background subtraction is a promising mechanism and should be investigated further over large scale datasets (given the availability of high-resolution masks; e.g., MS COCO). We found that it improves robustness substantially over various types of image perturbations and attacks. Humans can discern the foreground object from the image background with high precision. This feat might be the key to robustness and hints towards an interplay and feedback loop between recognition and segmentation that is currently missing in CNNs. Second, measuring human performance on ObjectNet will provide a useful baseline for gauging model performance. Barbu et al. report an accuracy of around 95% when they asked subjects to mention the objects that are present in the scene. This task, however, is different from recognizing isolated objects similar to the regime that was considered here (i.e., akin to rapid scene categorization tasks; See Serre et al. (2007)). Besides, error patterns of models and humans (e.g., Borji & Itti (2014)), in addition to crude accuracy measures, will inform us about the differences in object recognition mechanisms between humans and machines. It could be that models work in a completely different fashion than the human visual system. Third, as discussed in Section 3.1, multi-label prediction accuracy is more appropriate for evaluating recognition models. Annotating all objects in ObjectNet images will thus provide an additional dimension to assess models. In this regard, we propose a new task where the goal is to recognize objects in their natural contexts. This task resembles (cropped) object recognition and object detection, but it is slightly different (i.e., the goal here is to recognize an object limited by a bounding box given all available information in the scene). This is essentially an argument against the recognition-detection dichotomy. Finally, it would be interesting to see how well the state of the art object detectors perform on the ObjectNet dataset (e.g., over overlapped classes between ObjectNet and MS COCO (Lin et al., 2014)). We expect a significant drop in detection performance since it is hard to recognize objects in this dataset.
From a broader perspective, our study reinforces the idea that there is more to scene understanding than merely learning statistical correlations. In particular, background subtraction and visual context are crucial in robust recognition and demand further investigation in future studies.
A FREQUENCY OF THE IMAGES PER CATEGORY
B MODEL ACCURACY PER CATEGORY USING BOXES VS. FULL IMAGE
C EASIEST AND HARDEST OBJECTS FOR THE RESNET-152 MODEL
D SOME CHALLENGING EXAMPLES FOR HUMANS
E ANNOTATION ISSUES IN OBJECT RECOGNITION DATASETS
F ANALYSING MODEL ROBUSTNESS OVER NATURALLY DISTORTED IMAGES
G ADVERSARIAL DEFENSE USING FOREGROUND DETECTION ON MNIST AND FASHION MNIST
H STATISTICS OF OBJECTNET DATASET | 1. What are the limitations of real-world applications in computer vision datasets?
2. How do the authors contribute to investigating these challenges in their paper?
3. What are some strengths and weaknesses of the paper's approach to analyzing real-world object detection challenges?
4. Are there any concerns regarding the authors' methodology in applying synthetic distortions to images?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
Real-world applications frequently provide challenges that are not seen in common computer vision datasets like Imagenet, where images are blurry, dark, corrupted, objects are highly occluded, test objects may be out of distribution due to natural distribution shifts, etc. This phenomenon was investigated in 2018 by Beery et al., who similarly found categorization is easier with localization in challenging real-world scenes (i.e., classifying cropped boxes). I would recommend taking a look at that paper (citation below) and including it in the related work.
Beery, S., Van Horn, G., & Perona, P. (2018). Recognition in terra incognita. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 456-473).
Pros:
The authors provide many experiments digging into various aspects of what makes real-world object detection challenging. This is a useful reference point for future work.
Cons: This paper re-analyzes generalization in the context of an existing dataset. There is not anything particularly novel about the analysis, and similar results have been shown on other real-world datasets. The authors describe the image distortions they consider to be “natural”, but are applying them synthetically. It is not clear to me that applying a synthetic distortion of a type that can be seen in the real world is necessarily reflective of those realistic distortions in the wild. It would be better to explicitly collect examples of these types of distortions in real data and compare against that. |
ICLR | Title
Contemplating Real-World Object Classification
Abstract
Deep object recognition models have been very successful over benchmark datasets such as ImageNet. How accurate and robust are they under distribution shifts arising from natural and synthetic variations in datasets? Prior research on this problem has primarily focused on ImageNet variations (e.g., ImageNetV2, ImageNet-A). To avoid potential inherited biases in these studies, we take a different approach. Specifically, we reanalyze the ObjectNet dataset1 recently proposed by Barbu et al. containing objects in daily life situations. They showed a dramatic performance drop of the state of the art object recognition models on this dataset. Due to the importance and implications of their results regarding the generalization ability of deep models, we take a second look at their analysis. We find that applying deep models to the isolated objects, rather than the entire scene as is done in the original paper, results in around 20-30% performance improvement. Relative to the numbers reported in Barbu et al., around 10-15% of the performance loss is recovered, without any test time data augmentation. Despite this gain, however, we conclude that deep models still suffer drastically on the ObjectNet dataset. We also investigate the robustness of models against synthetic image perturbations such as geometric transformations (e.g., scale, rotation, translation), natural image distortions (e.g., impulse noise, blur) as well as adversarial attacks (e.g., FGSM and PGD-5). Our results indicate that limiting the object area as much as possible (i.e., from the entire image to the bounding box to the segmentation mask) leads to consistent improvement in accuracy and robustness. Finally, through a qualitative analysis of ObjectNet data, we find that i) a large number of images in this dataset are hard to recognize even for humans, and ii) easy (hard) samples for models match with easy (hard) samples for humans. Overall, our analyses show that ObjectNet is still a challenging test platform for evaluating the generalization ability of models. Code and data are available at https://github.com/aliborji/ObjectNetReanalysis.git.
1 INTRODUCTION
Object recognition3 can be said to be the most basic problem in vision sciences. It is required in the early stages of visual processing before a system, be it a human or a machine, can accomplish other tasks such as searching, navigating, or grasping. Application of a convolutional neural network architecture (CNN) known as LeNet (LeCun et al., 1998), albeit with new bells and whistles (Krizhevsky et al., 2012), revolutionized not only computer vision but also several other areas. With the initial excitement gradually dampening, researchers have started to study the shortcomings of deep models and question their generalization ability. From prior research, we already know that CNNs: a) lack generalization to out of distribution samples (e.g., Recht et al. (2019); Barbu et al. (2019); Shankar et al. (2020); Taori et al. (2020); Koh et al. (2020)). Even after being exposed to many different instances of the same object category, they fail to fully capture the concept. In stark contrast, humans can generalize from only a few examples (a.k.a. few-shot learning), b) perform poorly when applied to transformed versions of the same object. In other words, they
1https://objectnet.dev/ 2See https://openreview.net/forum?id=Q4EUywJIkqr for reviews and discussions. A preliminary version of this work has been published on arXiv (Borji, 2020). 3Classification of an object appearing alone in an image. For images containing multiple objects, object localization or detection is required first.
are not invariant to spatial transformations (e.g., translation, in-plane and in-depth rotation, scale) as shown in (Azulay & Weiss, 2019; Engstrom et al., 2019; Fawzi & Frossard, 2015), as well as noise corruptions (Hendrycks & Dietterich, 2019; Geirhos et al., 2018b), and c) are vulnerable to imperceptible adversarial image perturbations (Szegedy et al., 2013; Goodfellow et al., 2014; Nguyen et al., 2015). The majority of these works, however, have used either the ImageNet dataset or its variations, and thus might be biased towards ImageNet characteristics. Utilizing a very challenging dataset that has been proposed recently, known as ObjectNet (Barbu et al., 2019), here we seek to answer how well the state of the art CNNs generalize to real world object recognition scenarios. We also explore the role of spatial context in object recognition and answer whether it is better to use cropped objects (using bounding boxes) or segmented objects to achieve higher accuracy and robustness. Furthermore, we study the relationship between object recognition, scene understanding, and object detection. These are important problems that have been less explored.
Several datasets have been proposed for training and testing object recognition models, and to study their generalization ability (e.g., ImageNet by Deng et al. (2009), Places by Zhou et al. (2017), CIFAR by Krizhevsky et al. (2009), NORB by LeCun et al. (2004), and iLab20M by Borji et al. (2016)). As the most notable one, ImageNet dataset has been very instrumental for gauging the progress in object recognition over the past decade. A large number of studies have tested new ideas by training deep models on ImageNet (from scratch), or by finetuning pre-trained (on ImageNet) classification models on other datasets. With ImageNet being retired, the state of the object recognition problem remains unclear. Several questions such as out of distribution generalization, “superhuman performance” (He et al., 2016) and invariance to transformations persist. To rekindle the discourse, recently Barbu et al. (2019) introduced the ObjectNet dataset which according to their claim has less bias than other recognition datasets4. This dataset is supposed to be used solely as a test set and comes with a licence that disallows researchers from finetuning models on it. Images are pictured by Mechanical Turk workers using a mobile app in a variety of backgrounds, rotations, and imaging viewpoints. ObjectNet contains 50,000 images across 313 categories, out of which 113 are in common with ImageNet categories. Astonishingly, Barbu et al. found that the state of the art object recognition models perform drastically lower on ObjectNet compared to their performance on ImageNet (about 40-45% drop). Our principal goal here is to revisit Barbu et al.’s analysis and measure the actual performance drop on ObjectNet compared to ImageNet. To this end, we limit our analysis to the 113 overlapped categories between the two datasets. We first annotate the objects in the ObjectNet scenes by drawing boxes around them. We then apply a number of deep models on these object boxes and find that models perform significantly better now, compared to their performance on the entire scene (as is done in Barbu et al.). Interestingly, and perhaps against the common belief, we also find that training and testing models on segmented objects, rather than the object bounding box or the full image, leads to consistent improvement in accuracy and robustness over a range of classification tasks and image transformations (geometric, natural distortions, and adversarial attacks). Lastly, we provide a qualitative (and somewhat anecdotal) analysis of extreme cases in object recognition for humans and machines.
2 RELATED WORK
Robustness against synthetic distribution shifts. Most research on assessing model robustness has been focused on synthetic image perturbations (e.g., spatial transformations, noise corruptions, simulated weather artifacts, temporal changes (Gu et al., 2019), and adversarial examples) perhaps because it is easy to precisely define, implement, and apply them to arbitrary images. While models have improved significantly in robustness to these distribution shifts (e.g., Zhang (2019); Zhang et al. (2019); Cohen & Welling (2016)), they are still not as robust as humans. Geirhos et al. (2018b) showed that humans are more tolerant against image manipulations like contrast reduction, additive noise, or novel eidolon-distortions than models. Further, humans and models behave differently (witnessed by different error patterns) as the signal gets weaker. Zhu et al. (2016) contrast the influence of the foreground object and image background on the performance of humans and models.
Robustness against natural distribution shifts. Robustness on real data is a clear challenge for deep neural networks. Unlike synthetic distribution shifts, it is difficult to define distribution shifts that occur naturally in the real-world (such as subtle changes in scene composition, object types, and lighting conditions). Recht et al. (2019) closely followed the original ImageNet creation process
4ObjectNet dataset, however, has its own biases. It consists of indoor objects that are available to many people, are mobile, are not too large, too small, fragile or dangerous.
to build a new test set called ImageNetV2. They reported a performance gap of about 11% (top-1 acc.) between the performance of the best deep models on this dataset and the original test set. Similar observations have been made by Shankar et al. (2020). By evaluating 204 ImageNet models in 213 different test conditions, Taori et al. (2020) found that a) current synthetic robustness does not imply natural robustness. In other words, robustness measures for synthetic distribution shifts are weakly predictive of robustness on the natural distribution shifts, b) robustness measurements should control for accuracy since higher robustness can sometimes be explained by the higher accuracy on a standard unperturbed test set, and c) training models on larger and more diverse data improves robustness but does not lead to full closure of the performance gap. A comprehensive benchmark of distribution shifts in the wild, known as WILDS, has recently been published by Koh et al. (2020), encompassing different data modalities including vision. In D’Amour et al. (2020), authors regard “underspecification” a major challenge to the credibility and generalization of modern machine learning pipelines. An ML pipeline is underspecified when it returns models that perform very well on held-out test sets during training but perform poorly at deployment time.
Contextual interference. Context plays a significant role in pattern recognition and visual reasoning (e.g., Bar (2004); Torralba & Sinha (2001); Rabinovich et al. (2007); Heitz & Koller (2008); Galleguillos & Belongie (2010)). The extent to which visual context is being used by deep models is still unclear. Unlike models, humans are very good at exploiting context when it is helpful and discard it when it causes ambiguity. In other words, deep models do not understand what is the foreground object and what constitutes the background5. Nagarajan et al. (2020) mention that ML models utilize features (e.g., image background) which are spuriously correlated with the label during training. This makes them fragile at the test time when statistics slightly differ. As we argue here, this is one of the main reasons why deep models are so vulnerable to geometric and adversarial perturbations. Geirhos et al. (2020) have studied this phenomenon under the “shortcut learning” terminology from a broader perspective.
Insights from human vision. CNNs turn out to be good models of human vision and can explain the first feed-forward sweep of information (See Kriegeskorte (2015) for a review). They, however, differ from human visual processing in several important ways. Current object recognition methods do not rely on segmentation, whereas figure-ground segmentation plays a significant role in human vision, in particular for the encoding of spatial relations between 3D object parts (Biederman, 1987; Serre, 2019). Some computer vision works, predating deep learning, have also shown that pre-segmenting the image before applying the recognition algorithms improves the accuracy (Malisiewicz & Efros, 2007; Rabinovich et al., 2007; Rosenfeld & Weinshall, 2011). Unlike the human vision system, CNNs are hindered drastically in crowded scenes (e.g., Volokitin et al. (2017)). CNNs rely more on texture whereas humans pay more attention to shape (Geirhos et al., 2018a). Utilizing minimal recognizable images, Ullman et al. (2016) argued that the human visual system uses features and processes that are not used by current deep models.
5As an example, consider a model that is trained to classify camels vs. cows, with camels always shown in sandy backgrounds and cows shown against grassy backgrounds. Although such a model does well during training, it gets confused when presented with cows in sandy backgrounds at test time (Beery et al., 2018). See also Rosenfeld et al. (2018) for another example in the context of object detection
[Figure 2: overlay of our results on Fig. 1 of the ObjectNet paper. Two panels (“ObjectNet paper” and “Our analysis”) plot Accuracy % for recognizers by year, with curves for ObjectNet Top-1 (box) and ObjectNet Top-5 (box); annotations read “40-45% performance drop”, “25-35% performance drop”, and “Using our code”.]
3 EXPERIMENTS AND RESULTS
3.1 ACCURACY AND ROBUSTNESS AGAINST NATURAL DISTRIBUTION SHIFTS
A critique of Barbu et al. (2019). Barbu et al.’s work is a great contribution to the field to answer how well object recognition models generalize to real-world circumstances and to control for biases in data collection. It, however, suffers from a major shortcoming, namely making no distinction between “object detection” and “object recognition”. This confusion brings along several concerns:
1. They use the term “object detector” to refer to “object recognition” models. Object detection and object recognition are two distinct, yet related, tasks. Each one has its own models, datasets, evaluation measures, and inductive biases. For example, as shown in Fig. 1, images in object recognition datasets (e.g., ImageNet) often contain a single object, usually from a closeup view, whereas scenes in object detection datasets (e.g., MS COCO (Lin et al., 2014), OpenImages (Kuznetsova et al., 2018)) usually have multiple objects. Objects in the detection datasets vary more in some parameters such as occlusion and size. For instance, there is a larger variation in object scale in detection datasets (Singh & Davis, 2018). This discussion also relates to the distinction between “scene understanding” and “object recognition”. To understand a complex scene, as humans we look around, fixate on individual objects to recognize them, and accumulate information over fixations to perform more complex tasks such as answering a question or describing an event. To avoid biases in recognition datasets (e.g., typical scales or object views), we propose to (additionally) use detection datasets to study object recognition. We will discuss this further in Section 4.
2. Instead of applying models to isolated objects, Barbu et al. apply them to cluttered scenes containing multiple objects. Unlike ImageNet where the majority of images include only a single object, ObjectNet images have multiple objects in them and are often more cluttered. Therefore, the drop in performance of models on ObjectNet can be merely due to the fact that pretrained models on ImageNet have been trained on individual objects.
3. In addition to top-1 accuracy, Barbu et al. also report top-5 accuracy. One might argue that this may suffice in dealing with scenes containing multiple objects. Top-5 accuracy was first introduced in Russakovsky et al. (2015) to remedy the issues with the top-1 accuracy. The latter can be overly stringent by penalizing predictions that appear in the image but do not correspond to the target label. Top-5 accuracy itself, however, has two shortcomings. First, a model can still be penalized if all of the five guesses exist in the image, but none is the image label. Both scores fall short in addressing the images with counter-intuitive labels (e.g., when non-salient objects are labeled; Appx. E). Second, on fine-grained classification tasks (ImageNet has several fine-grained classes e.g., dogs), allowing five
predictions can make certain class distinctions trivial (Shankar et al., 2020). For example, there are five turtles in the ImageNet class hierarchy (mud turtle, box turtle, loggerhead turtle, leatherback turtle, and terrapin) that are difficult to distinguish. A classifier may trick the score by generating all of these labels for a turtle image to ensure it predicts the correct label. Shankar et al. proposed to use multi-label accuracy as an alternative to top-5 score. Each image has a set of target labels (i.e., multi-label annotations). A prediction is marked correct if it corresponds to any of the target labels for that image. This score, however, may favor a model that generates correct labels but may confuse the locations over a model that is spatially more precise but misses some objects (See also Beyer et al. (2020)). Regardless, since multi-label annotations for ObjectNet are not available, we report both top-1 and top-5 scores when feeding isolated objects to models.
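As a minimal illustration of the multi-label score described above (a hedged sketch with illustrative names, not an official implementation), a prediction is counted as correct if it matches any label in the image's target set:

```python
def multi_label_accuracy(predictions, target_label_sets):
    """predictions: one predicted class per image;
    target_label_sets: an iterable of label sets, one per image."""
    hits = sum(pred in labels
               for pred, labels in zip(predictions, target_label_sets))
    return hits / len(predictions)

# e.g., multi_label_accuracy(["dog", "cat"], [{"dog", "leash"}, {"sofa"}]) == 0.5
```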
Bounding box annotation. The 113 object categories in the ObjectNet dataset, overlapped with the ImageNet, contain 18,574 images in total. On this subset, the average number of images per category is 164.4 (min=55, max=284). Fig. 8 in Appx. A shows the distribution of the number of images per category on this dataset (envelope and dish drying rack are the most and least frequent objects, respectively). We drew a bounding box around the object corresponding to the category label of each image. If there were multiple nearby objects from the same category (e.g., chairs around a table), we tried to include all of them in the bounding box. Some example scenes and their corresponding bounding boxes are given in Fig. 1. Appx. H shows more stats on ObjectNet. Object recognition results. We employ six widely-used state of the art deep neural networks including AlexNet (Krizhevsky et al., 2012), VGG-19 (Simonyan & Zisserman, 2014), GoogLeNet (Szegedy et al., 2015), ResNet-152 (He et al., 2016), Inception-v3 (Szegedy et al., 2016)6, and MNASNet (Tan et al., 2019). AlexNet, VGG-19, and ResNet-152 have also been used in the ObjectNet paper (Barbu et al., 2019). We use the PyTorch implementation of these models7. Since the code from the ObjectNet paper is unavailable (at the time of preparing this work), in addition to applying models to bounding boxes and plotting the results on top of the results from the ObjectNet paper, we also run our code on both the bounding boxes and the full images. This allows a fair comparison and helps mitigate possible inconsistency in data processing methods (e.g., different data normalization schemes or test time data augmentation such as rotation, scale, color jittering, cropping, etc.).
Fig. 2 shows an overlay of our results on top of Fig. 1 from the ObjectNet paper. As can be seen, applying models to the object bounding box instead of the entire scene improves the accuracy about 10-15%. Although the gap is narrower now, models still perform significantly worse on ObjectNet than on the ImageNet dataset. Using our code, the improvement going from full image to bounding boxes is around 20-30% across all tested models (the right panel in Fig. 2). Our results using the full image are lower than Barbu et al.’s results using the full image (possibly because we do not utilize data augmentation). This relative difference entails that applying their code to bounding boxes will likely improve the performance beyond the 10% that we obtained here. Assuming a 25% gain in performance on top of their best results when using boxes will still not close the performance gap, which indicates that ObjectNet remains a challenging dataset for testing object recognition models.
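The evaluation protocol is straightforward to reproduce. Below is a minimal sketch, assuming the torchvision pretrained models (the paper states it uses their PyTorch implementations); the preprocessing pipeline and the `box` coordinates from our annotations are illustrative:

```python
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

net = models.resnet152(pretrained=True).eval()

def top1_prediction(img: Image.Image, box=None) -> int:
    """ImageNet top-1 class for the full image, or for a cropped
    bounding box given as (left, upper, right, lower) pixel coords."""
    if box is not None:
        img = img.crop(box)
    with torch.no_grad():
        logits = net(preprocess(img).unsqueeze(0))
    return logits.argmax(dim=1).item()
```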
Breakdown of accuracy over the 113 categories is shown in Appx. B (Figs. 9 & 10 over isolated objects and Figs. 11 & 12 over the full image). Interestingly, in both cases, almost all models, except GoogLeNet on isolated objects and AlexNet on the full image, perform the best over the safety pin category. Inspecting the images from this class, we found that they have a single safety pin often held by a person (perhaps about the same distance from the camera thus similar scales). The same story is true about the banana class which is the second easiest category using the bounding boxes. This object becomes much harder to recognize when using the full image (26.88% vs. 70.3% using boxes) which highlights the benefit of applying models to isolated objects rather than scenes.
3.2 ACCURACY AND ROBUSTNESS AGAINST SYNTHETIC DISTRIBUTION SHIFTS
3.2.1 ROBUSTNESS AGAINST COMMON IMAGE CORRUPTIONS
Previous work has shown that ImageNet-trained CNNs generalize poorly over a wide range of image distortions (e.g., Hendrycks & Dietterich (2019); Azulay & Weiss (2019); Dodge & Karam (2017)). These works, however, have applied CNNs to the whole scene. Here, we
6Barbu et al. have used Inception-v4. 7https://pytorch.org/docs/stable/torchvision/models.html
ask whether applying the models to the bounding boxes can improve robustness against image distortions. Following Hendrycks & Dietterich (2019), we systematically test how model accuracy degrades if images are corrupted by 14 different types of distortions including Gaussian noise, shot noise, impulse noise, defocus blur, glass blur, motion blur, zoom blur, snow, frost, fog, brightness, contrast, elastic transform, and JPEG compression at 3 levels of corruption severity. Fig. 36 (Appx. F) shows sample images along with their distortions. Ten images from each of the 113 categories of ObjectNet (1130 images in total) were fed to three models including VGG-19, Inception-v3, and ResNet-152.
Aggregate results over the full image and the object bounding box (both resized to 224 × 224 pixels) are shown in Fig. 3. All three models are more robust when applied to the object bounding box than the full image at all corruption levels, using both top-1 and top-5 scores (left two panels). Among models, ResNet-152 performs better and is the most robust model. It is followed by the Inception-v3 model. For nearly all of the 113 object categories, using bounding boxes leads to higher robustness than using the full image (the third panel). Similarly, using bounding boxes results in higher robustness against all distortion types (the right-most panel). Across distortion types, shown in Figs. 37 & 38 (Appx. F), ResNet-152 consistently outperforms the other two models at all severity levels, followed by Inception-v3. It seems that models are hindered more by impulse noise, frost, zoom blur, and snow distortions. The top-1 accuracy at severity level 2 on these distortions is below 20%. Overall, we conclude that limiting the object area only to the bounding box leads not only to higher prediction accuracy but also to higher robustness against image distortions. Extrapolating this approach, can we improve robustness by shrinking the object region even further by using the segmentation masks? We will thoroughly investigate this question in the next subsections.
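To make the severity sweep concrete, here is a hedged sketch for one corruption type (Gaussian noise); the three-level sigma schedule is an assumption for illustration and is not necessarily the schedule of Hendrycks & Dietterich (2019):

```python
import torch

def gaussian_noise(x: torch.Tensor, severity: int) -> torch.Tensor:
    """Corrupt a batch in [0, 1] at severity level 1, 2, or 3."""
    sigma = [0.04, 0.08, 0.12][severity - 1]   # assumed schedule
    return (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)

def accuracy_under_corruption(net, loader, severity: int) -> float:
    net.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = net(gaussian_noise(x, severity)).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total
```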
3.2.2 ROBUSTNESS AGAINST ADVERSARIAL PERTURBATIONS
Despite being very accurate, CNNs are highly vulnerable to adversarial inputs (Szegedy et al., 2013; Goodfellow et al., 2014). These inputs are crafted carefully and maliciously by adding small imperceptible perturbations to them (e.g., altering the value of a pixel up to 8 units under the `∞-norm; pixels in the range [0, 255]). Here we apply the ImageNet pretrained models to 1130 images that were selected above. The models are tested against the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014) at two perturbation budgets in the untargeted white-box setting.
Table 1 shows the results. We find that models are more resilient against the FGSM attack when applied to the bounding box than the full image. While the input size is the same in both cases (224×224), the adversary has more opportunity to mislead the
classifier in the full image case since a larger fraction of pixels play an insignificant role in the decisions made by the network. This aligns with observations from the visualization tools (e.g., Selvaraju et al. (2017)) revealing that CNNs indeed rely only on a small subset of image pixels to elicit a decision. One might argue that the lower robustness on the full images could be due to training and test discrepancy (i.e., training models on single objects and applying them to the entire scene). To address this, in the next subsection we train and test models in the same condition.
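For reference, a minimal untargeted white-box FGSM step (Goodfellow et al., 2014) looks as follows; this sketch assumes inputs in [0, 1] and a cross-entropy loss, with `eps` standing in for the perturbation budgets reported in Table 1:

```python
import torch
import torch.nn.functional as F

def fgsm(net, x, y, eps):
    """One-step untargeted FGSM: move each pixel by eps along the sign of the loss gradient."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(net(x), y).backward()
    return (x.detach() + eps * x.grad.sign()).clamp(0.0, 1.0)
```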
3.3 THE INFLUENCE OF THE SURROUNDING CONTEXT ON ROBUSTNESS
Despite a large body of literature on whether and how much visual context benefits CNNs in terms of accuracy and robustness8, the matter has not been settled yet (e.g., Bar (2004); Torralba & Sinha (2001); Rabinovich et al. (2007); Rosenfeld et al. (2018); Heitz & Koller (2008); Divvala et al. (2009); Zhu et al. (2016); Xiao et al. (2020); Malisiewicz & Efros (2007)). To study how context surrounding an object impacts model accuracy and robustness in more detail, we conducted
two experiments. In the first one, we trained two CNNs (2 conv layers, each followed by a pooling layer and 2 final fc layers) on MNIST and Fashion MNIST datasets, for which it is easy to derive the foreground masks (Figs. 39 & 40; Appx. G). CNNs were trained on either the original clean images or the foreground objects placed on a white noise background. We then tested the models against the FGSM attack with and without background subtraction. With background subtraction, we essentially assume that the adversary has access only to the foreground object (i.e., effectively removing the perturbations that fall on the background). As results in Fig. 4 show, background subtraction improves the robustness substantially for both models and over both datasets.
To examine whether the above conclusion generalizes to more complex natural scenes, we ran a second experiment. First, we selected images from ten classes of the MS COCO dataset including chair, car, book, bottle, dining table, umbrella, boat, motorcycle, sheep, and cow. Objects from these classes come with a segmentation mask (one object per image; 100 images per category; 1000 images in total). Around 32.7% of the image pixels fall inside the object bounding box and around 58.1% of the bounding box pixels fall inside the object mask. Fig. 5 shows a sample chair alongside its bounding box and its segmentation mask.
We then trained three ResNet-18 models (finetuned on ImageNet), one per each input type: 1) full image, 2) bounding box, and 3) segmented object (placed in a dark background). Models were trained on 70 images per category (700 in total) for 10 epochs and were then tested on the remaining 30 images per category. An attempt was made to tune the parameters to attain the best test accuracy in each case (e.g., by avoiding overfitting). The test accuracies9 of the models are, in order, 66.9%, 78%, and 80.3%. One reason behind the lower prediction accuracy using boxes might be that multiple objects may fit inside the bounding box (e.g., for elongated objects such as broom). Model performance against FGSM and ℓ∞ PGD-5 (Projected Gradient Descent by Madry et al. (2017)) adversarial attacks is shown in Fig. 5 (left panel). We observe that training models on segmented objects leads to higher adversarial robustness against both types of attacks. The improvement is more pronounced at higher perturbations. We also considered a condition in which we masked the perturbations that fall on the background, denoted as “Seg. Mask + FG” in the figure. We noticed even higher robustness against the attacks by removing the background perturbations. These results encourage using foreground detection as an effective adversarial defense.
8The majority of such works are focused on model accuracy. 9Taori et al. (2020) argue that robustness scores should control for accuracy, as more predictive models in general are more robust. To avoid this issue we used models that have about the same standard accuracy.
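For concreteness, the per-input-type training setup reduces to fine-tuning an ImageNet-pretrained ResNet-18 with a 10-class head. Below is a minimal sketch; dataset wiring and hyperparameters are assumptions for illustration:

```python
import torch.nn as nn
from torchvision import models

def make_finetune_model(num_classes: int = 10) -> nn.Module:
    """ImageNet-pretrained ResNet-18 with the classifier head replaced."""
    net = models.resnet18(pretrained=True)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

# The same constructor is used three times; only the training images differ:
# full images, bounding-box crops, or segmented objects on a dark background.
```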
The middle panel in Fig. 5 shows model robustness against noise corruptions (averaged over the 14 distortions used in Section 3.2.1). Here again, we find that using segmentation masks leads to higher robustness compared to the full image and object boxes. “Seg. Mask + FG” leads to the best robustness among the input types. While it might be hard to draw a general conclusion regarding the superiority of the segmentation masks over bounding boxes in object recognition accuracy, our investigation suggests that using masks leads to a significant boost in adversarial robustness with little or no drop in standard accuracy. Our results offer an upper bound on the utility of segmentation masks in robustness. More work is needed to incorporate this feat in CNNs (e.g., using attention).
3.3.1 ROBUSTNESS AGAINST GEOMETRIC TRANSFORMATIONS
We also tested the ResNet-18 model (i.e., trained over the full image, the bounding box, and the segmented object on MS COCO; as above) against three geometric transformations including scaling, in-plane rotation, and horizontal translation. Fig. 6 shows the results over the 300 test images that were used in the previous subsection. We find that the model trained on segmentation masks is more robust than the other two models over all three geometric transformations, followed by the models trained on the object bounding boxes and the full image, in order.
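The three transformation sweeps can be sketched with torchvision's functional transforms; the parameter grids below are illustrative, not the exact values behind Fig. 6:

```python
import torchvision.transforms.functional as TF

def geometric_variants(img):
    """Yield scaled, rotated, and horizontally translated copies of a PIL image."""
    for scale in (0.5, 0.75, 1.25, 1.5):                      # scaling
        yield TF.affine(img, angle=0, translate=(0, 0), scale=scale, shear=0)
    for angle in (-30, -15, 15, 30):                          # in-plane rotation
        yield TF.rotate(img, angle)
    for dx in (-32, -16, 16, 32):                             # horizontal translation
        yield TF.affine(img, angle=0, translate=(dx, 0), scale=1.0, shear=0)
```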
3.4 QUALITATIVE INSPECTION OF OBJECTNET IMAGES AND ANNOTATIONS
During the annotation of ObjectNet images, we came across the following observations: a) Some objects look very different when they are in motion (e.g., the fan in row 4 of Fig. 34 in Appx. D), or when they are shadowed or occluded by other objects (e.g., the hammer in Fig. 34 row 4), b) Some object instances differ a lot from the typical instances in the same class (e.g., the helmet in Fig. 34 row 5; the orange in Fig. 33 row 5), c) Some objects can be recognized only through reading their captions (e.g., the pet food container in Fig. 33 row 2), d) Some images have wrong labels (e.g., the pillow in Fig. 33 row 2; the skirt in Fig. 33 row 1; the tray in Fig. 34 row 2; See also Appx. E), e) Some objects are extremely difficult for humans (e.g., the tennis racket in Fig. 34 row 4; the shovel in Fig. 33 row 4; the tray in Fig. 33 row 1), f) In many images, objects are occluded by hands holding them (e.g., the sock and the shovel in Fig. 33 row 4), g) Some objects are hard to recognize in dim light (e.g., the printer in Fig. 33 row 2), and h) Some categories are often confused with other categories in the same set. Example sets include {bath towel, bed sheet, full sized towel, dishrag or hand towel}, {sandal, dress shoe (men), running shoe}, {t-shirt, dress, sweater, suit jacket, skirt}, {ruler, spatula, pen, match}, {padlock, combination lock}, and {envelope, letter}. The left panel in Fig. 7 shows four easy (highly confident correct predictions) and four hard (highly confident misclassifications) examples for ResNet-152 over six ObjectNet categories. In terms of the difficulty level, easy (difficult) objects for models appear easy (difficult) to humans too. Also, our qualitative inspection shows that ObjectNet includes a large number of objects that can be recognized only after a careful examination (the right panel in Fig. 7). More examples are given in Appx. C.
4 TAKEAWAYS AND DISCUSSION
Our investigation reveals that deep models perform significantly better when applied to isolated objects rather than the entire scene. The reason behind this is two-fold. First, there is less variability in single objects compared to scenes containing multiple objects. Second, deep models (used here and also in the ObjectNet paper) have been trained on ImageNet images which are less cluttered compared to the ObjectNet images. We anticipate that training models from scratch on large scale datasets that contain isolated objects will likely result in even higher accuracy. Assuming around 30% increase in performance (at best) on top of Barbu et al.’s results using bounding boxes still leaves a large gap of at least 15% between ImageNet and ObjectNet, which means that ObjectNet is indeed much harder. It covers a wider range of variations than ImageNet including object instances, viewpoints, rotations, occlusions, etc., which pushes the limits of object recognition models. Hence, despite its limitations and biases, the ObjectNet dataset remains a great platform to test deep models in realistic situations.
We envision four research directions for the future work in this area. First, background subtraction is a promising mechanism and should be investigated further over large scale datasets (given the availability of high-resolution masks; e.g., MS COCO). We found that it improves robustness substantially over various types of image perturbations and attacks. Humans can discern the foreground object from the image background with high precision. This feat might be the key to robustness and hints towards an interplay and feedback loop between recognition and segmentation that is currently missing in CNNs. Second, measuring human performance on ObjectNet will provide a useful baseline for gauging model performance. Barbu et al. report an accuracy of around 95% when they asked subjects to mention the objects that are present in the scene. This task, however, is different from recognizing isolated objects similar to the regime that was considered here (i.e., akin to rapid scene categorization tasks; See Serre et al. (2007)). Besides, error patterns of models and humans (e.g., Borji & Itti (2014)), in addition to crude accuracy measures, will inform us about the differences in object recognition mechanisms between humans and machines. It could be that models work in a completely different fashion than the human visual system. Third, as discussed in Section 3.1, multi-label prediction accuracy is more appropriate for evaluating recognition models. Annotating all objects in ObjectNet images will thus provide an additional dimension to assess models. In this regard, we propose a new task where the goal is to recognize objects in their natural contexts. This task resembles (cropped) object recognition and object detection, but it is slightly different (i.e., the goal here is to recognize an object limited by a bounding box given all available information in the scene). This is essentially an argument against the recognition-detection dichotomy. Finally, it would be interesting to see how well the state of the art object detectors perform on the ObjectNet dataset (e.g., over overlapped classes between ObjectNet and MS COCO (Lin et al., 2014)). We expect a significant drop in detection performance since it is hard to recognize objects in this dataset.
From a broader perspective, our study reinforces the idea that there is more to scene understanding than merely learning statistical correlations. In particular, background subtraction and visual context are crucial in robust recognition and demand further investigation in future studies.
A FREQUENCY OF THE IMAGES PER CATEGORY
B MODEL ACCURACY PER CATEGORY USING BOXES VS. FULL IMAGE
C EASIEST AND HARDEST OBJECTS FOR THE RESNET-152 MODEL
D SOME CHALLENGING EXAMPLES FOR HUMANS
E ANNOTATION ISSUES IN OBJECT RECOGNITION DATASETS
F ANALYSING MODEL ROBUSTNESS OVER NATURALLY DISTORTED IMAGES
G ADVERSARIAL DEFENSE USING FOREGROUND DETECTION ON MNIST AND FASHION MNIST
H STATISTICS OF OBJECTNET DATASET | 1. What is the main contribution of the paper regarding ObjectNet dataset?
2. What are the strengths and weaknesses of the proposed approach in terms of experimental setup and conclusions?
3. How does the reviewer assess the quality and consistency of the terminology used in the paper, particularly in contrast to other works in the field?
4. Are there any concerns regarding the annotation process and its potential impact on model performance and robustness?
5. How does the reviewer evaluate the effectiveness of the analysis conducted in the paper, especially in relation to the research question and gaps between datasets? | Review | Review
This paper revisits the ObjectNet dataset closely and finds that applying classifiers on object bounding boxes significantly reduces the gap between ImageNet and ObjectNet. The authors further investigate the robustness of CNNs against image perturbations and adversarial attacks, and find that limiting the object area to its segmentation mask significantly improves model accuracy (and robustness). Qualitative evaluation is also performed over confident and less-confident / incorrect model predictions, and finds it correlates with human perception.
Pros:
More analyses like this paper does would help bridge the gap between ML/CV model performance in staged datasets and real-world scenarios
The experimentation conducted in this paper is comprehensive, accompanied with many in-depth inspection over a large dataset. The insights drawn from this paper would be invaluable for researchers working in this field.
Cons:
My major concern with this paper (and the main factor of rating it clear rejection) is the experimental setup used in section 3.3. From authors they "selected images from ten classes of the ObjectNet dataset ... manually annotated the object of interest in each image". Then "Models were trained on 70 images per category". (also from Figure 39 "In total we annotated 1K images across ten categories of the ObjectNet dataset."). If interpreted correctly, the models are trained on part of ObjectNet images which clearly violates dataset license "ObjectNet may never be used to tune the parameters of any model." (https://objectnet.dev/download.html).
While appreciating the authors' study on model robustness, the conclusion drawn from several experiments seems to confuse "performance" vs. "robustness", where the former indicates the model has better accuracy, and the latter measures how model accuracy varies with increasing noise / perturbations. See more details below.
Some (minor) comments:
Sec 3.1. 1) I share the concern from authors that "object detector" should be not confused with "object recognition" (or commonly used "image classifier"). Hopefully vision community could use more consistent terms across literatures.
Sec 3.1 1) While detection dataset would surely have more scale variation (and truncation / occlusion due to many objects are not in the center), it is not entirely clear that object in detection datasets "vary more in parameters such as lighting, ..., and blur".
Sec 3.1 1) It would be great to see more analysis on detection datasets (authors mentioned they will discuss in section 4, but only with very little analysis).
Sec 3.1, 2) While ImageNet and ObjectNet have distinct characteristics, having some stats on object size / spatial placement might better illustrate the gaps between these datasets.
Sec 3.1 3) Agree top-5 might make classifiers' life easy, but it is more of an eval metric rather than training loss (the model still need to predict top-1 class correct during training). Meanwhile, it is not very clear why multi-label annotation would bias against model that "is spatially more precise but misses some objects". That model should be evaluated against detection benchmarks, rather than object recognition (image classification) datasets.
Sec 3.1 "bounding box annotation". For multiple objects nearby with the same category, the annotation would include all of them in one bounding box. This might leads to bad aspect ratio? (in general, bounding box would also vary more in aspect ratio than images, and feeding bounding box into a square CNN seems to be less ideal)
Sec 3.1 "object recognition results". "AlexNet, VGG-19, and GoogLeNet have also been used", GoogLeNet should be "ResNet-152"? ObjectNet uses inception (GoogLeNet) v4 while authors use v3.
Sec 3.2.1 "The higher the prediction accuracy, the higher the robustness against natural distortions". This is not necessarily true. Looking at Figure 3, it seems all models and both image / bounding box schemes would have decayed performance w.r.t distortion severity, and their slopes seem similar.
Sec 3.3 "Despite a large body of literature on whether and how much visual context benefits CNNs, the majority being focused on model accuracy, the matter has not been settled yet". "the matter" means context for robustness? please clarify.
Sec 3.3 " Around 35.5% and 12.6% of the image pixels fall inside the object bounding box and the foreground object, respectively. Around 58.5% of the bounding box pixels fall inside the object mask.". here foreground object and object mask should be the same thing? if so, the numbers seem not matching (would expect box-to-image ratio * mask-to-box-ratio = mask-to-image ratio, or these numbers are normalized differently)?
Figure 5: it is clear that seg-mask actually isn't robust to adversarial attacks (accuracy dropped significantly), which contradicts the claim from the authors.
Sec 3.3 "An attempt was made to tune the parameters to attain the best test accuracy in each case", could authors elaborate?
Sec 3.3.1. here the results seem to indicate segmask is more robust (less variations). Would the segmentation mask itself be indicative for object categories? It might be interesting to predict object classes directly from those masks as a baseline.
Final review:
The authors updated the manuscript and removed tuning experiment on ObjectNet. I am still a bit concerned about the definition of "robustness", but the paper overall does look good for ICLR publication. |
ICLR | Title
Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits
Abstract
To explore the vulnerability of deep neural networks (DNNs), many attack paradigms have been well studied, such as the poisoning-based backdoor attack in the training stage and the adversarial attack in the inference stage. In this paper, we study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes. Specifically, our goal is to misclassify a specific sample into a target class without any sample modification, while not significantly reducing the prediction accuracy of other samples, to ensure stealthiness. To this end, we formulate this problem as a binary integer programming (BIP) problem, since the parameters are stored as binary bits (i.e., 0 and 1) in the memory. By utilizing the latest technique in integer programming, we equivalently reformulate this BIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM). Consequently, the flipped critical bits can be easily determined through optimization, rather than using a heuristic strategy. Extensive experiments demonstrate the superiority of our method in attacking DNNs. The code is available at: https://github.com/jiawangbai/TA-LBF.
1 INTRODUCTION
Due to the great success of deep neural networks (DNNs), their vulnerability (Szegedy et al., 2014; Gu et al., 2019) has attracted great attention, especially for security-critical applications (e.g., face recognition (Dong et al., 2019) and autonomous driving (Eykholt et al., 2018)). For example, backdoor attack (Saha et al., 2020; Xie et al., 2019) manipulates the behavior of the DNN model by mainly poisoning some training data in the training stage; adversarial attack (Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2017) aims to fool the DNN model by adding malicious perturbations onto the input in the inference stage.
Compared to the backdoor attack and adversarial attack, a novel attack paradigm, dubbed weight attack (Breier et al., 2018), has been rarely studied. It assumes that the attacker has full access to the memory of a device, such that he/she can directly change the parameters of a deployed model to achieve some malicious purposes (e.g., crushing a fully functional DNN and converting it to a random output generator (Rakin et al., 2019)). Since a weight attack neither modifies the input nor controls the training process, it is difficult for both the service provider and the user to realize the existence of the attack. In practice, since the deployed DNN model is stored as binary bits in the memory, the attacker can modify the model parameters using some physical fault injection techniques, such as Row Hammer Attack (Agoyan et al., 2010; Selmke et al., 2015) and Laser Beam Attack (Kim et al., 2014). These techniques can precisely flip any bit of the data in the memory. Some previous works (Rakin et al., 2019; 2020a;b) have demonstrated that it is feasible to change the model weights via bit flipping to achieve some malicious purposes. However, the critical bits are identified mostly
†This work was done when Jiawang Bai was an intern at Tencent AI Lab. Correspondence to: Baoyuan Wu (wubaoyuan@cuhk.edu.cn) and Shu-Tao Xia (xiast@sz.tsinghua.edu.cn).
using some heuristic strategies in their methods. For example, Rakin et al. (2019) combined gradient ranking and progressive search to identify the critical bits for flipping.
This work also focuses on the bit-level weight attack against DNNs in the deployment stage, albeit with two different goals: effectiveness and stealthiness. The effectiveness requires that the attacked model can misclassify a specific sample into an attacker-specified target class without any sample modification, while the stealthiness encourages that the prediction accuracy of other samples will not be significantly reduced. As shown in Fig. 1, to achieve these goals, we propose to identify and flip bits that are critical to the prediction of the specific sample but do not significantly impact the prediction of other samples. Specifically, we treat each bit in the memory as a binary variable, and our task is to determine its state (i.e., 0 or 1). Accordingly, it can be formulated as a binary integer programming (BIP) problem. To further improve the stealthiness, we also limit the number of flipped bits, which can be formulated as a cardinality constraint. However, how to solve the BIP problem with a cardinality constraint is a challenging problem. Fortunately, inspired by an advanced optimization method, the ℓp-box ADMM (Wu & Ghanem, 2018), this problem can be reformulated as a continuous optimization problem, which can further be efficiently and effectively solved by the alternating direction method of multipliers (ADMM) (Glowinski & Marroco, 1975; Gabay & Mercier, 1976). Consequently, the flipped bits can be determined through optimization rather than the original heuristic strategy, which makes our attack more effective. Note that we also conduct attacks against quantized DNN models, following the setting in some related works (Rakin et al., 2019; 2020a). Extensive experiments demonstrate the superiority of the proposed method over several existing weight attacks. For example, our method achieves a 100% attack success rate with 7.37 bit-flips and 0.09% accuracy degradation on the remaining inputs in attacking an 8-bit quantized ResNet-18 model on ImageNet. Moreover, we demonstrate that the proposed method is also more resistant to existing defense methods.
The main contributions of this work are three-fold. 1) We explore a novel attack scenario where the attacker enforces a specific sample to be predicted as a target class by modifying the weights of a deployed model via bit flipping without any sample modification. 2) We formulate the attack as a BIP problem with the cardinality constraint and propose an effective and efficient method to solve this problem. 3) Extensive experiments verify the superiority of the proposed method against DNNs with or without defenses.
2 RELATED WORKS
Neural Network Weight Attack. How to perturb the weights of a trained DNN for malicious purposes has received extensive attention (Liu et al., 2017a; 2018b; Hong et al., 2019). Liu et al. (2017a) first proposed two schemes to modify model parameters for misclassification, without and with considering stealthiness, which are dubbed the single bias attack (SBA) and the gradient descent attack (GDA), respectively. After that, the Trojan attack (Liu et al., 2018b) was proposed, which injects malicious behavior into the DNN by generating a general trojan trigger and then retraining the model. This method requires changing lots of parameters. Recently, the fault sneaking attack (FSA) (Zhao et al., 2019) was proposed, which aims to misclassify certain samples into a target class by modifying the DNN parameters under two constraints: maintaining the classification accuracy of other samples and minimizing parameter modifications. Note that all those methods are designed to misclassify multiple samples instead of a specific sample, and may therefore modify lots of parameters or sharply degrade the accuracy of other samples.
Bit-Flip based Attack. Recently, some physical fault injection techniques (Agoyan et al., 2010; Kim et al., 2014; Selmke et al., 2015) were proposed, which can be adopted to precisely flip any bit in the memory. These techniques have prompted researchers to study how to modify model parameters at the bit-level. As a branch of weight attack, the bit-flip based attack was first explored in (Rakin et al., 2019), which proposed an untargeted attack that can convert the attacked DNN into a random output generator with several bit-flips. Besides, Rakin et al. (2020a) proposed the targeted bit Trojan (TBT) to inject a fault into DNNs by flipping some critical bits. Specifically, the attacker flips the identified bits to force the network to classify all samples embedded with a trigger into a certain target class, while the network operates with normal inference accuracy on benign samples. Most recently, Rakin et al. (2020b) proposed the targeted bit-flip attack (T-BFA), which achieves malicious purposes without modifying samples. Specifically, T-BFA can mislead samples from a single source class or from all classes into a target class by flipping the identified weight bits. It is worth noting that the above bit-flip based attacks leverage heuristic strategies to identify critical weight bits. How to find critical bits for the bit-flip based attack is still an important open question.
3 TARGETED ATTACK WITH LIMITED BIT-FLIPS (TA-LBF)
3.1 PRELIMINARIES
Storage and Calculation of Quantized DNNs. Quantizing DNNs before deploying them on devices is a widely used technique for improving efficiency and reducing storage size. Each weight in the l-th layer of a Q-bit quantized DNN is represented and stored in the memory as a signed integer in two's complement representation, v = [v_Q; v_{Q−1}; …; v_1] ∈ {0,1}^Q. The attacker can modify the weights of DNNs by flipping the stored binary bits. In this work, we adopt a layer-wise uniform weight quantization scheme similar to Tensor-RT (Migacz, 2017). Accordingly, each binary vector v can be converted to a real number by a function h(·), as follows:
h(v) = \Big( -2^{Q-1} \cdot v_Q + \sum_{i=1}^{Q-1} 2^{i-1} \cdot v_i \Big) \cdot \Delta_l, \quad (1)
where l indicates which layer the weight is from, ∆l > 0 is a known and stored constant which represents the step size of the l-th layer weight quantizer.
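To make Eq. (1) concrete, below is a minimal sketch of the de-quantization function; the 8-bit example values and step size are hypothetical:

```python
import numpy as np

def h(v, delta_l):
    """Map a Q-bit two's complement vector v = [v_Q, v_{Q-1}, ..., v_1]
    (sign bit first) to a real-valued weight, as in Eq. (1)."""
    v = np.asarray(v)
    Q = v.size
    # Bits v[1:] carry magnitudes 2^{Q-2}, ..., 2^0; v[0] is the sign bit.
    magnitude = sum(int(v[j]) * 2 ** (Q - 1 - j) for j in range(1, Q))
    return (-2 ** (Q - 1) * int(v[0]) + magnitude) * delta_l

# Hypothetical 8-bit weight with step size 0.05:
print(h([1, 0, 0, 0, 0, 0, 1, 1], delta_l=0.05))  # (-128 + 3) * 0.05 = -6.25
```

Flipping a single stored bit therefore changes the weight by ±2^{i−1}·Δ_l (or ∓2^{Q−1}·Δ_l for the sign bit), which is what the attack exploits.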
Notations. We denote a Q-bit quantized DNN-based classification model as f : X → Y, where X ⊆ ℝ^d is the input space and Y = {1, 2, …, K} is the K-class output space. We assume that the last layer of this DNN model is a fully-connected layer whose quantized weights are B ∈ {0,1}^{K×C×Q}, where C is the dimension of the last layer's input. Let B_{i,j} ∈ {0,1}^Q be the two's complement representation of a single weight, and let B_i ∈ {0,1}^{C×Q} denote all the binary weights connected to the i-th output neuron. Given a test sample x with ground-truth label s, f(x; Θ, B) ∈ [0,1]^K is the output probability vector and g(x; Θ) ∈ ℝ^C is the input of the last layer, where Θ denotes the model parameters without the last layer.
Attack Scenario. In this paper, we focus on the white-box bit-flip based attack, which was first introduced in (Rakin et al., 2019). Specifically, we assume that the attacker has full knowledge of the model (including its architecture, parameters, and the parameters' locations in the memory) and can precisely flip any bit in the memory. Besides, we assume that the attacker has access to a small portion of benign samples, but cannot tamper with the training process or the training data.
Attacker’s Goals. The attacker has two main goals: effectiveness and stealthiness. Specifically, effectiveness requires that the attacked model misclassify a specific sample into a predefined target class without any sample modification, and stealthiness requires that the prediction accuracy on other samples not be significantly reduced.
3.2 THE PROPOSED METHOD
Loss for Ensuring Effectiveness. Recall that our first target is to force a specific image to be classified as the target class by modifying the model parameters at the bit-level. To this end, the most straightforward way is maximizing the logit of the target class while minimizing that of the source class. For a sample x, the logit of a class can be directly determined by the input of the last layer g(x; Θ) and weights connected to the node of that class. Accordingly, we can modify weights only connected to the source and target class to fulfill our purpose, as follows:
\mathcal{L}_1(x; \Theta, B, \hat{B}_s, \hat{B}_t) = \max\big( m - p(x; \Theta, \hat{B}_t) + \delta, \, 0 \big) + \max\big( p(x; \Theta, \hat{B}_s) - m + \delta, \, 0 \big), \quad (2)
where p(x; Θ, B̂_i) = [h(B̂_{i,1}); h(B̂_{i,2}); …; h(B̂_{i,C})]^⊤ g(x; Θ) denotes the logit of class i (i = s or i = t), h(·) is the function defined in Eq. (1), m = max_{i ∈ {1,…,K}∖{s}} p(x; Θ, B_i), and δ ∈ ℝ
indicates a slack variable, which will be specified in later experiments. The first term of L_1 aims at increasing the logit of the target class, while the second term decreases the logit of the source class. The loss L_1 is 0 only when the logit of the target class exceeds m + δ and the logit of the source class is below m − δ; that is, the target model predicts the predefined target class on x. Note that B̂_s, B̂_t ∈ {0,1}^{C×Q} are the two variables we want to optimize, corresponding to the weights of the fully-connected layer w.r.t. classes s and t, respectively, in the target DNN model. B ∈ {0,1}^{K×C×Q} denotes the weights of the fully-connected layer of the original DNN model, and it is a constant tensor in L_1. For clarity, hereafter we simplify L_1(x; Θ, B, B̂_s, B̂_t) as L_1(B̂_s, B̂_t), since x and Θ are given and fixed.
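As a concrete illustration, here is a minimal sketch of the effectiveness loss in Eq. (2), assuming the K logits have already been computed from g(x; Θ) and the candidate weights; the function and variable names are ours:

```python
import numpy as np

def effectiveness_loss(logits, s, t, delta):
    """Margin loss of Eq. (2): push the target logit above m + delta and
    the source logit below m - delta, where m is the largest logit over
    all classes except the source class s."""
    m = max(logits[i] for i in range(len(logits)) if i != s)
    return max(m - logits[t] + delta, 0.0) + max(logits[s] - m + delta, 0.0)

# Hypothetical 4-class logits; attack a sample of class s=0 toward t=3.
logits = np.array([5.0, 1.0, 0.5, 2.0])
print(effectiveness_loss(logits, s=0, t=3, delta=1.0))  # 5.0 here; 0 once the attack succeeds
```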
Loss for Ensuring Stealthiness. As mentioned in Section 3.1, we assume that the attacker has access to an auxiliary sample set {(x_i, y_i)}_{i=1}^N. Accordingly, the stealthiness of the attack can be formulated as follows:
\mathcal{L}_2(\hat{B}_s, \hat{B}_t) = \sum_{i=1}^{N} \ell\big( f(x_i; \Theta, B_{\{1,\ldots,K\} \setminus \{s,t\}}, \hat{B}_s, \hat{B}_t), \, y_i \big), \quad (3)
where B_{{1,…,K}∖{s,t}} denotes {B_1, B_2, …, B_K} ∖ {B_s, B_t}, and f_j(x_i; Θ, B_{{1,…,K}∖{s,t}}, B̂_s, B̂_t) indicates the posterior probability of x_i w.r.t. class j, calculated by Softmax(p(x_i; Θ, B̂_j)) or Softmax(p(x_i; Θ, B_j)). ℓ(·, ·) is specified as the cross-entropy loss. To keep clarity, x_i, Θ, and B_{{1,…,K}∖{s,t}} are omitted in L_2(B̂_s, B̂_t). Besides, to further improve the stealthiness, a straightforward additional approach is to reduce the magnitude of the modification. In this paper, we constrain the number of bit-flips to be at most k. Physical bit flipping techniques can be time-consuming, as discussed in (Van Der Veen et al., 2016; Zhao et al., 2019). Moreover, such techniques lead to abnormal behaviors in the attacked system (e.g., suspicious cache activity of processes), which may be detected by some physical detection-based defenses (Gruss et al., 2018). As such, minimizing the number of bit-flips is critical to make the attack more efficient and practical.
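Similarly, a minimal sketch of the stealthiness loss in Eq. (3), assuming the K-class logits of the auxiliary samples are given; the helper below is illustrative, not part of the paper:

```python
import numpy as np

def stealthiness_loss(logits_aux, labels_aux):
    """Sum of cross-entropy losses of Eq. (3) over the auxiliary set,
    given the K-class logits of each auxiliary sample."""
    logits_aux = np.asarray(logits_aux, dtype=float)
    # Log-softmax, computed in a numerically stable way.
    shifted = logits_aux - logits_aux.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels_aux)), labels_aux].sum()

# Two hypothetical auxiliary samples with 4-class logits:
print(stealthiness_loss([[2.0, 0.1, 0.1, 0.1], [0.2, 3.0, 0.1, 0.1]], [0, 1]))
```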
Overall Objective. Combining the two losses, the final objective function is as follows:
\min_{\hat{B}_s, \hat{B}_t} \; \mathcal{L}_1(\hat{B}_s, \hat{B}_t) + \lambda \mathcal{L}_2(\hat{B}_s, \hat{B}_t), \quad \text{s.t.} \; \hat{B}_s \in \{0,1\}^{C \times Q}, \; \hat{B}_t \in \{0,1\}^{C \times Q}, \; d_H(B_s, \hat{B}_s) + d_H(B_t, \hat{B}_t) \leq k, \quad (4)
where d_H(·, ·) denotes the Hamming distance and λ > 0 is a trade-off parameter. For the sake of brevity, B_s and B_t are concatenated and reshaped into the vector b ∈ {0,1}^{2CQ}; similarly, B̂_s and B̂_t are concatenated and reshaped into the vector b̂ ∈ {0,1}^{2CQ}. Besides, for binary vectors b and b̂, the Hamming distance coincides with the squared Euclidean distance, d_H(b, b̂) = ‖b − b̂‖₂², as spelled out below Eq. (5). The new formulation of the objective is as follows:
\min_{\hat{b}} \; \mathcal{L}_1(\hat{b}) + \lambda \mathcal{L}_2(\hat{b}), \quad \text{s.t.} \; \hat{b} \in \{0,1\}^{2CQ}, \; \|b - \hat{b}\|_2^2 - k \leq 0. \quad (5)
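The reduction of the cardinality constraint in (4) to the Euclidean form in (5) rests on a one-line identity for binary vectors:

d_H(b, \hat{b}) \;=\; \sum_{i=1}^{2CQ} \mathbb{1}[b_i \neq \hat{b}_i] \;=\; \sum_{i=1}^{2CQ} (b_i - \hat{b}_i)^2 \;=\; \|b - \hat{b}\|_2^2,

since (b_i − b̂_i)² ∈ {0, 1} and equals 1 exactly when the two bits differ.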
Problem (5) is denoted as TA-LBF (targeted attack with limited bit-flips). Note that TA-LBF is a binary integer programming (BIP) problem, whose optimization is challenging. We will introduce an effective and efficient method to solve it in the following section.
3.3 AN EFFECTIVE OPTIMIZATION METHOD FOR TA-LBF
To solve the challenging BIP problem (5), we adopt a generic solver for integer programming, dubbed the ℓp-Box ADMM (Wu & Ghanem, 2018). The solver has shown superior performance in many tasks, e.g., model pruning (Li et al., 2019), clustering (Bibi et al., 2019), MAP inference (Wu et al., 2020a), adversarial attack (Fan et al., 2020), etc. It replaces the binary constraint equivalently with the intersection of two continuous constraints, as follows:
\hat{b} \in \{0,1\}^{2CQ} \;\Leftrightarrow\; \hat{b} \in (\mathcal{S}_b \cap \mathcal{S}_p), \quad (6)

where S_b = [0,1]^{2CQ} is the box constraint, and S_p = {b̂ ∈ ℝ^{2CQ} : ‖b̂ − 1/2‖₂² = 2CQ/4} is the ℓ2-sphere constraint. The equivalence holds because every coordinate of a point in S_b satisfies (b̂_i − 1/2)² ≤ 1/4, with equality if and only if b̂_i ∈ {0, 1}, and the sphere constraint forces equality in every coordinate. Utilizing (6), Problem (5) is equivalently reformulated as
\min_{\hat{b}, \, u_1 \in \mathcal{S}_b, \, u_2 \in \mathcal{S}_p, \, u_3 \in \mathbb{R}_+} \; \mathcal{L}_1(\hat{b}) + \lambda \mathcal{L}_2(\hat{b}), \quad \text{s.t.} \; \hat{b} = u_1, \; \hat{b} = u_2, \; \|b - \hat{b}\|_2^2 - k + u_3 = 0, \quad (7)
where the two extra variables u_1 and u_2 are introduced to split the constraints w.r.t. b̂. Besides, the non-negative slack variable u_3 ∈ ℝ_+ is used to transform ‖b − b̂‖₂² − k ≤ 0 in (5) into ‖b − b̂‖₂² − k + u_3 = 0. The above constrained optimization problem can be efficiently solved by the alternating direction method of multipliers (ADMM) (Boyd et al., 2011).
Following the standard procedure of ADMM, we first present the augmented Lagrangian function of the above problem, as follows:
L(\hat{b}, u_1, u_2, u_3, z_1, z_2, z_3) = \mathcal{L}_1(\hat{b}) + \lambda \mathcal{L}_2(\hat{b}) + z_1^\top (\hat{b} - u_1) + z_2^\top (\hat{b} - u_2) + z_3 (\|b - \hat{b}\|_2^2 - k + u_3) + c_1(u_1) + c_2(u_2) + c_3(u_3) + \frac{\rho_1}{2} \|\hat{b} - u_1\|_2^2 + \frac{\rho_2}{2} \|\hat{b} - u_2\|_2^2 + \frac{\rho_3}{2} (\|b - \hat{b}\|_2^2 - k + u_3)^2, \quad (8)
where z_1, z_2 ∈ ℝ^{2CQ} and z_3 ∈ ℝ are dual variables, and ρ_1, ρ_2, ρ_3 > 0 are penalty factors, which will be specified later. c_1(u_1) = I_{u_1 ∈ S_b}, c_2(u_2) = I_{u_2 ∈ S_p}, and c_3(u_3) = I_{u_3 ∈ ℝ_+} capture the constraints S_b, S_p, and ℝ_+, respectively. The indicator function I_{a} = 0 if a is true; otherwise, I_{a} = +∞. Based on the augmented Lagrangian function, the primal and dual variables are updated iteratively, with r indicating the iteration index.
Given (b̂^r, z_1^r, z_2^r, z_3^r), update (u_1^{r+1}, u_2^{r+1}, u_3^{r+1}). Given (b̂^r, z_1^r, z_2^r, z_3^r), the variables (u_1, u_2, u_3) are independent, and they can be optimized in parallel, as follows:

u_1^{r+1} = \arg\min_{u_1 \in \mathcal{S}_b} \; (z_1^r)^\top (\hat{b}^r - u_1) + \frac{\rho_1}{2} \|\hat{b}^r - u_1\|_2^2 = P_{\mathcal{S}_b}\big( \hat{b}^r + \tfrac{z_1^r}{\rho_1} \big),
u_2^{r+1} = \arg\min_{u_2 \in \mathcal{S}_p} \; (z_2^r)^\top (\hat{b}^r - u_2) + \frac{\rho_2}{2} \|\hat{b}^r - u_2\|_2^2 = P_{\mathcal{S}_p}\big( \hat{b}^r + \tfrac{z_2^r}{\rho_2} \big),
u_3^{r+1} = \arg\min_{u_3 \in \mathbb{R}_+} \; z_3^r (\|b - \hat{b}^r\|_2^2 - k + u_3) + \frac{\rho_3}{2} (\|b - \hat{b}^r\|_2^2 - k + u_3)^2 = P_{\mathbb{R}_+}\big( -\|b - \hat{b}^r\|_2^2 + k - \tfrac{z_3^r}{\rho_3} \big), \quad (9)

where P_{\mathcal{S}_b}(a) = \min(\mathbf{1}, \max(\mathbf{0}, a)) with a ∈ ℝ^n is the projection onto the box constraint S_b; P_{\mathcal{S}_p}(a) = \frac{\sqrt{n}}{2} \frac{\bar{a}}{\|\bar{a}\|} + \frac{1}{2} with \bar{a} = a − \frac{1}{2} is the projection onto the ℓ2-sphere constraint S_p (Wu & Ghanem, 2018); and P_{\mathbb{R}_+}(a) = \max(0, a) with a ∈ ℝ is the projection onto ℝ_+.
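All three projections admit one-line implementations; a minimal sketch (names are ours):

```python
import numpy as np

def proj_box(a):
    """Projection onto the box S_b = [0, 1]^n (element-wise clipping)."""
    return np.clip(a, 0.0, 1.0)

def proj_sphere(a):
    """Projection onto the l2-sphere S_p = {x : ||x - 1/2||_2^2 = n/4},
    following Wu & Ghanem (2018)."""
    n = a.size
    a_bar = a - 0.5
    return (np.sqrt(n) / 2.0) * a_bar / np.linalg.norm(a_bar) + 0.5

def proj_nonneg(a):
    """Projection onto R_+ (a is a scalar here)."""
    return max(0.0, a)
```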
Given (u_1^{r+1}, u_2^{r+1}, u_3^{r+1}, z_1^r, z_2^r, z_3^r), update b̂^{r+1}. Although there is no closed-form solution for b̂^{r+1}, it can be easily updated by the gradient descent method, as both L_1(b̂) and L_2(b̂) are differentiable w.r.t. b̂, as follows:

\hat{b}^{r+1} \leftarrow \hat{b}^r - \eta \cdot \frac{\partial L(\hat{b}, u_1^{r+1}, u_2^{r+1}, u_3^{r+1}, z_1^r, z_2^r, z_3^r)}{\partial \hat{b}} \Big|_{\hat{b} = \hat{b}^r}, \quad (10)
where η > 0 denotes the step size. Note that we can run multiple steps of gradient descent in the above update. Both the number of steps and η will be specified in later experiments. Besides, due to the space limit, the detailed derivation of ∂L/∂b̂ will be presented in Appendix A.
Given (b̂^{r+1}, u_1^{r+1}, u_2^{r+1}, u_3^{r+1}), update (z_1^{r+1}, z_2^{r+1}, z_3^{r+1}). The dual variables are updated by the gradient ascent method, as follows:

z_1^{r+1} = z_1^r + \rho_1 (\hat{b}^{r+1} - u_1^{r+1}), \quad
z_2^{r+1} = z_2^r + \rho_2 (\hat{b}^{r+1} - u_2^{r+1}), \quad
z_3^{r+1} = z_3^r + \rho_3 (\|b - \hat{b}^{r+1}\|_2^2 - k + u_3^{r+1}). \quad (11)
Remarks. 1) Note that since (u_1^{r+1}, u_2^{r+1}, u_3^{r+1}) are updated in parallel, their updates belong to the same block; thus, the above algorithm is a two-block ADMM algorithm. We provide the algorithm outline in Appendix B. 2) Except for the update of b̂^{r+1}, all other updates are very simple and efficient. The computational cost of the whole algorithm is analyzed in Appendix C. 3) Due to the inexact solution for b̂^{r+1} via gradient descent, the theoretical convergence of the whole ADMM algorithm cannot be guaranteed. However, as demonstrated in many previous works (Gol’shtein & Tret’yakov, 1979; Eckstein & Bertsekas, 1992; Boyd et al., 2011), the inexact two-block ADMM often shows good practical convergence, which is also the case in our later experiments. The numerical convergence analysis is presented in Appendix D. 4) A proper adjustment of (ρ_1, ρ_2, ρ_3) can accelerate the practical convergence; the schedule will be specified later.
4 EXPERIMENTS
4.1 EVALUATION SETUP
Settings. We compare our method (TA-LBF) with GDA (Liu et al., 2017a), FSA (Zhao et al., 2019), T-BFA (Rakin et al., 2020b), and TBT (Rakin et al., 2020a). All those methods can be adopted to misclassify a specific image into a target class. We also take the fine-tuning (FT) of the last fully-connected layer as a baseline method. We conduct experiments on CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015). We randomly select 1,000 images from each dataset as the evaluation set for all methods. Specifically, for each of the 10 classes in CIFAR-10, we perform attacks on the 100 randomly selected validation images from the other 9 classes. For ImageNet, we randomly choose 50 target classes. For each target class, we perform attacks on 20 images randomly selected from the rest classes in the validation set. Besides, for all methods except GDA which does not employ auxiliary samples, we provide 128 and 512 auxiliary samples on CIFAR-10 and ImageNet, respectively. Following the setting in (Rakin et al., 2020a;b), we adopt the quantized ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015) as the target models. For our TA-LBF, the trade-off parameter λ and the constraint parameter k affect the attack stealthiness and the attack success rate. We adopt a strategy for jointly searching λ and k, which is specified in Appendix E.3. More descriptions of our settings are provided in Appendix E.
Evaluation Metrics. We adopt three metrics to evaluate the attack performance, i.e., the post attack accuracy (PA-ACC), the attack success rate (ASR), and the number of bit-flips (Nflip). PA-ACC denotes the post attack accuracy on the validation set, excluding the specific attacked sample and the auxiliary samples. ASR is defined as the ratio of attacked samples that are successfully misclassified into the target class among all 1,000 attacked samples. Nflip is the number of bit-flips required for an attack. A better attack performance corresponds to a higher PA-ACC, a higher ASR, and a lower Nflip. Besides, we also report the accuracy of the original model, denoted as ACC.
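For concreteness, the three metrics can be computed as in the following sketch, where the arrays of predictions, labels, and bit tensors are hypothetical inputs:

```python
import numpy as np

def attack_metrics(preds_clean, labels_clean, preds_attacked, targets,
                   bits_orig, bits_attacked):
    """PA-ACC: post attack accuracy on benign validation samples;
    ASR: fraction of attacked samples predicted as their target class;
    Nflip: number of flipped bits (Hamming distance of the bit tensors)."""
    pa_acc = float(np.mean(preds_clean == labels_clean))
    asr = float(np.mean(preds_attacked == targets))
    n_flip = int(np.sum(bits_orig != bits_attacked))
    return pa_acc, asr, n_flip
```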
4.2 MAIN RESULTS
Results on CIFAR-10. The results of all methods on CIFAR-10 are shown in Table 1. Our method achieves a 100% ASR with the fewest Nflip for all bit-widths and architectures. FT modifies the largest number of bits among all methods, since it places no limit on parameter modifications. Due to the absence of the training data, the PA-ACC of FT is also poor. These results indicate that fine-tuning the trained DNN is infeasible as an attack method. Although T-BFA flips the second-fewest bits in three cases, it fails to achieve a higher ASR than GDA and FSA. In terms of PA-ACC, TA-LBF is comparable to other methods. Note that the PA-ACC of TA-LBF significantly outperforms that of GDA, which is the most competitive baseline w.r.t. ASR and Nflip. The PA-ACC of GDA is relatively poor because it does not employ auxiliary samples. Achieving the highest ASR, the lowest Nflip, and a comparable PA-ACC demonstrates that our optimization-based method is superior to the heuristic methods (TBT, T-BFA, and GDA).
Results on ImageNet. The results on ImageNet are shown in Table 1. It can be observed that GDA shows very competitive performance compared to other methods. However, our method obtains the highest PA-ACC, the fewest bit-flips (fewer than 8), and a 100% ASR in attacking ResNet. For VGG, our method also achieves a 100% ASR with the fewest Nflip for both bit-widths. The Nflip results of our method are mainly attributed to the cardinality constraint on the number of bit-flips. Moreover, for our method, the average PA-ACC degradation over the four cases on ImageNet is only 0.06%, which demonstrates the stealthiness of our attack. When comparing the results of ResNet and VGG, an interesting observation is that all methods require significantly more bit-flips for VGG. One reason is that VGG is much wider than ResNet. Similar to the claim in (He et al., 2020), increasing the network width contributes to robustness against the bit-flip based attack.
4.3 RESISTANCE TO DEFENSE METHODS
Resistance to Piece-wise Clustering. He et al. (2020) proposed a novel training technique, called piece-wise clustering, to enhance the network robustness against the bit-flip based attack. Such a training technique introduces an additional weight penalty to the inference loss, which has the effect of eliminating close-to-zero weights (He et al., 2020). We test the resistance of all attack methods to the piece-wise clustering. We conduct experiments with the 8-bit quantized ResNet on CIFAR-10 and ImageNet. Following the ideal configuration in (He et al., 2020), the clustering coefficient, which is a hyper-parameter of piece-wise clustering, is set to 0.001 in our evaluation. For our method, the initial k is set to 50 on ImageNet and the rest settings are the same as those in Section 4.1. Besides the three metrics in Section 4.1, we also present the number of increased Nflip compared to the model without defense (i.e., results in Table 1), denoted as ∆Nflip.
The results of the resistance to piece-wise clustering of all attack methods are shown in Table 2. They show that a model trained with piece-wise clustering increases the number of required bit-flips for all attack methods. However, our method still achieves a 100% ASR with the fewest bit-flips on both datasets. Although TBT achieves a smaller ∆Nflip than ours on CIFAR-10, its ASR is only 52.3%, which also verifies the defense effectiveness of piece-wise clustering. Compared with other methods, TA-LBF achieves the fewest ∆Nflip on ImageNet and the best PA-ACC on both datasets. These results demonstrate the superiority of our method over other methods when attacking models trained with piece-wise clustering.
Resistance to Larger Model Capacity. Previous studies (He et al., 2020; Rakin et al., 2020b) observed that increasing the network capacity can improve the robustness against the bit-flip based attack. Accordingly, we evaluate all attack methods against the models with a larger capacity using the 8-bit quantized ResNet on both datasets. Similar to the strategy in (He et al., 2020), we increase the model capacity by varying the network width (i.e., 2× width in our experiments). All settings of our method are the same as those used in Section 4.1.
The results are presented in Table 2. We observe that all methods require more bit-flips to attack the model with the 2× width. To some extent, it demonstrates that the wider network with the same architecture is more robust against the bit-flip based attack. However, our method still achieves a 100% ASR with the fewest Nflip and ∆Nflip. Moreover, when comparing the two defense methods, we find that piece-wise clustering performs better than the model with a larger capacity in terms of ∆Nflip. However, piece-wise clustering training also causes the accuracy decrease of the original model (e.g., from 92.16% to 91.01% on CIFAR-10). We provide more results in attacking models with defense under different settings in Appendix F.
4.4 ABLATION STUDY
We perform ablation studies on the parameters λ and k, and on the number of auxiliary samples N. We use the 8-bit quantized ResNet on CIFAR-10 as the representative case for analysis. We discuss the attack performance of TA-LBF under different values of λ while k is fixed at 20, and under different values of k while λ is fixed at 10. To analyze the effect of N, we vary N from 25 to 800 and keep the other settings the same as those in Section 4.1. The results are presented in Fig. 2. We observe that our method achieves a 100% ASR when λ is less than 20. As expected, the PA-ACC increases while the ASR decreases as λ increases. The plot for parameter k shows that k exactly limits the number of bit-flips, whereas other attack methods involve no such constraint. This advantage is critical since it allows the attacker to identify a limited set of bits to perform an attack when the budget is fixed. As shown in the figure, a number of auxiliary samples below 200 has a marked positive impact on the PA-ACC. Intuitively, more auxiliary samples lead to a better PA-ACC. The observation also indicates that TA-LBF still works well without too many auxiliary samples.
4.5 VISUALIZATION OF DECISION BOUNDARY
To further compare FSA and GDA with our method, we visualize the decision boundaries of the original and post attack models in Fig. 3. We adopt a four-layer Multi-Layer Perceptron trained on a simulated 2-D Blob dataset with 4 classes. The original decision boundary indicates that the original model classifies all data points almost perfectly. The attacked sample is classified into Class 3 by all methods. Visually, GDA modifies the decision boundary drastically, especially for Class 0. In contrast, our method modifies the decision boundary mainly around the attacked sample. Although FSA is visually comparable to ours in Fig. 3, it flips 10× more bits than GDA and TA-LBF. In terms of the numerical results, TA-LBF achieves the best PA-ACC and the fewest Nflip. This finding verifies that our method can achieve a successful attack while only slightly tweaking the original classifier.
5 CONCLUSION
In this work, we have presented a novel attack paradigm in which the weights of a deployed DNN are slightly changed via bit flipping in the memory to produce a target prediction for a specific sample, while the predictions on other samples are not significantly influenced. Since the weights are stored as binary bits in the memory, we formulate this attack as a binary integer programming (BIP) problem, which can be effectively and efficiently solved by a continuous optimization algorithm. Since the critical bits are determined through optimization, the proposed method can achieve the attack goals by flipping only a few bits, and it shows very good performance under different experimental settings.
ACKNOWLEDGMENTS
This work is supported in part by the National Key Research and Development Program of China under Grant 2018YFB1800204, the National Natural Science Foundation of China under Grant 61771273, the R&D Program of Shenzhen under Grant JCYJ20180508152204044. Baoyuan Wu is supported by the Natural Science Foundation of China under grant No. 62076213, and the university development fund of the Chinese University of Hong Kong, Shenzhen under grant No. 01001810.
B ALGORITHM OUTLINE
Algorithm 1 Continuous optimization for the BIP problem (5). Input: the original quantized DNN model f with weights Θ, B; the attacked sample x with ground-truth label s; the target class t; the auxiliary sample set {(x_i, y_i)}_{i=1}^N; hyper-parameters λ, k, and δ. Output: b̂.
1: Initialize u_1^0, u_2^0, u_3^0, z_1^0, z_2^0, z_3^0, b̂^0 and let r ← 0;
2: while not converged do
3:    Update u_1^{r+1}, u_2^{r+1}, and u_3^{r+1} as in Eq. (9);
4:    Update b̂^{r+1} as in Eq. (10);
5:    Update z_1^{r+1}, z_2^{r+1}, and z_3^{r+1} as in Eq. (11);
6:    r ← r + 1.
7: end while
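A condensed sketch of Algorithm 1 is given below; loss_grad is an assumed callable implementing ∂L/∂b̂ from Appendix A, the penalty schedule and stopping criterion follow Appendix E.3, and the final rounding step is our own simplification:

```python
import numpy as np

def ta_lbf(b, loss_grad, k, eta=0.01, inner_steps=5, max_iter=2000, tol=1e-4):
    """Sketch of the inexact two-block ADMM for Problem (5).
    b: original bits as a NumPy array in {0,1}^n.
    loss_grad: assumed callable returning the gradient of the augmented
    Lagrangian (8) w.r.t. b_hat (derived in Appendix A)."""
    n = b.size
    b_hat = b.astype(float).copy()
    u1, u2, u3 = b_hat.copy(), b_hat.copy(), 0.0
    z1, z2, z3 = np.zeros(n), np.zeros(n), 0.0
    rho1, rho2, rho3 = 1e-4, 1e-4, 1e-5  # initial penalties (Appendix E.3)
    for _ in range(max_iter):
        # Block 1: independent projections, Eq. (9).
        u1 = np.clip(b_hat + z1 / rho1, 0.0, 1.0)                  # onto S_b
        a_bar = (b_hat + z2 / rho2) - 0.5
        u2 = np.sqrt(n) / 2 * a_bar / np.linalg.norm(a_bar) + 0.5  # onto S_p
        u3 = max(0.0, -np.sum((b - b_hat) ** 2) + k - z3 / rho3)   # onto R_+
        # Block 2: a few gradient steps on b_hat, Eq. (10).
        for _ in range(inner_steps):
            b_hat = b_hat - eta * loss_grad(b_hat, u1, u2, u3, z1, z2, z3,
                                            rho1, rho2, rho3)
        # Dual ascent, Eq. (11).
        z1 = z1 + rho1 * (b_hat - u1)
        z2 = z2 + rho2 * (b_hat - u2)
        z3 = z3 + rho3 * (np.sum((b - b_hat) ** 2) - k + u3)
        # Penalty schedule and stopping criterion (Appendix E.3).
        rho1, rho2 = min(rho1 * 1.01, 50), min(rho2 * 1.01, 50)
        rho3 = min(rho3 * 1.01, 5)
        if np.sum((b_hat - u1) ** 2) <= tol and np.sum((b_hat - u2) ** 2) <= tol:
            break
    return (b_hat > 0.5).astype(int)  # round the relaxed solution back to bits
```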
C COMPLEXITY ANALYSIS
The computational complexity of the proposed algorithm (i.e., Algorithm 1) consists of two parts, the forward and backward pass. In terms of the forward pass, since Θ and B_{{1,…,K}∖{s,t}} are fixed during the optimization, their involved terms, including g(x; Θ) and p(x; Θ, B_i) for i ≠ s, t, are calculated only once. The main cost, from B̂_s and B̂_t, is O(2(N+1)C²Q) per iteration, as there are N+1 samples. In terms of the backward pass, the main cost is from the update of b̂^{r+1}, which is O(2(N+1)CQ) per gradient step. Since all other updates are very simple, their costs are omitted here. Thus, the overall computational cost is O(T_outer · 2(N+1)CQ · (C + T_inner)), with T_outer being the number of iterations of the overall algorithm and T_inner the number of gradient steps in updating b̂^{r+1}. As shown in Appendix D, the proposed method TA-LBF always converges very fast in our experiments, so T_outer is not very large. As described in Appendix E.3, T_inner is set to 5 in our experiments. In short, the proposed method can be optimized very efficiently.
Besides, we also compare the computational complexity of different attacks empirically. Specifically, we compare the running time of different methods for attacking one image against the 8-bit quantized ResNet on the CIFAR-10 and ImageNet datasets. As shown in Table 3, TBT is the most time-consuming method among all attacks. Although the proposed TA-LBF is not superior to T-BFA, FSA, and GDA in running time, this gap is tolerable when attacking a single image in the deployment stage. Besides, our method performs better in terms of PA-ACC, ASR, and Nflip, as demonstrated in our experiments.
D NUMERICAL CONVERGENCE ANALYSIS
We present the numerical convergence of TA-LBF in Fig. 4. Note that ‖b̂ − u_1‖₂² and ‖b̂ − u_2‖₂² characterize the degree of satisfaction of the box and ℓ2-sphere constraints, respectively. For the two examples from CIFAR-10 and ImageNet, the values of both indicators first increase, then drop, and finally approach 0. Another interesting observation is that L_1 + λL_2 first decreases evidently and then increases slightly. These findings illustrate the optimization process of TA-LBF: in the early iterations, modifying the model parameters mainly serves the two goals mentioned in Section 3.1; in the late iterations, b̂ is encouraged to satisfy the box and ℓ2-sphere constraints. We also observe that both examples stop once ‖b̂ − u_1‖₂² ≤ 10⁻⁴ and ‖b̂ − u_2‖₂² ≤ 10⁻⁴ are met, without exceeding the maximum number of iterations (i.e., 2000). The numerical results demonstrate the fast convergence of our method in practice.
E EVALUATION SETUP
E.1 BASELINE METHODS
Since GDA (Liu et al., 2017a) and FSA (Zhao et al., 2019) are originally designed for attacking full-precision networks, we adapt these two methods to attack quantized networks by applying quantization-aware training (Jacob et al., 2018). We adopt the ℓ0-norm for FSA (Zhao et al., 2019) and modification compression for GDA (Liu et al., 2017a) to reduce the number of modified parameters. Among the three types of T-BFA (Rakin et al., 2020b), we compare against the most comparable variant: the 1-to-1 stealthy attack scheme. The purpose of this attack scheme is to misclassify samples of a single source class into the target class while maintaining the prediction accuracy of other samples. Besides, we take the fine-tuning (FT) of the last fully-connected layer as a basic attack and present its results. We perform the attack once for each selected image, i.e., 1,000 attacks in total on each dataset, except for TBT (Rakin et al., 2020a). The attack objective of TBT is that the attacked DNN model misclassifies all inputs with a trigger into a certain target class. Due to this objective, the number of attacks for TBT equals the number of target classes (i.e., 10 attacks on CIFAR-10 and 50 attacks on ImageNet).
E.2 TARGET MODELS
According to the setting in (Rakin et al., 2020a;b), we adopt two popular network architectures: ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015) for evaluation. On CIFAR-10, we perform experiments on ResNet-20 and VGG-16. On ImageNet, we use the pre-trained ResNet18* and VGG-16† network. We quantize all networks to the 4-bit and 8-bit quantization level using the layer-wise uniform weight quantization scheme, which is similar to the one involved in the Tensor-RT solution (Migacz, 2017).
E.3 PARAMETER SETTINGS OF TA-LBF
For each attack, we adopt a strategy for jointly searching λ and k. Specifically, for an initially given k, we search λ from a relatively large initial value and divide it by 2 if the attack does not succeed. The maximum search times of λ for a fixed k is set to 8. If this is exceeded, we double k and search λ again from the relatively large initial value. The maximum search times of k is set to 4. On CIFAR-10, the initial k and λ are set to 5 and 100. On ImageNet, λ is initialized as 10⁴; k is initialized as 5 and 50 for ResNet and VGG, respectively. On CIFAR-10, the δ in L_1 is set to 10. On ImageNet, δ is set to 3 and increased to 10 if the attack fails. u_1 and u_2 are initialized as b, and u_3 is initialized as 0; z_1, z_2, and z_3 are all initialized as 0. b̂ is initialized as b. During each iteration, the number of gradient steps for updating b̂ is 5 and the step size is set to 0.01 on both datasets. Hyper-parameters (ρ_1, ρ_2, ρ_3) (see Eq. (11)) are initialized as (10⁻⁴, 10⁻⁴, 10⁻⁵) on both datasets, and increase by ρ_i ← ρ_i × 1.01, i = 1, 2, 3 after each iteration. The maximum values of (ρ_1, ρ_2, ρ_3) are set to (50, 50, 5) on both datasets. Besides the maximum number of iterations (i.e., 2000), we also set another stopping criterion, i.e., ‖b̂ − u_1‖₂² ≤ 10⁻⁴ and ‖b̂ − u_2‖₂² ≤ 10⁻⁴.

*Downloaded from https://download.pytorch.org/models/resnet18-5c106cde.pth
†Downloaded from https://download.pytorch.org/models/vgg16_bn-6c64b313.pth
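The joint search over λ and k described above amounts to the following procedure, where run_attack is a hypothetical wrapper that returns whether the attack with the given (λ, k) succeeds:

```python
def search_lambda_k(run_attack, k_init, lam_init,
                    max_lam_trials=8, max_k_trials=4):
    """Jointly search the trade-off parameter lambda and bit budget k:
    halve lambda up to 8 times for a fixed k; if all trials fail,
    double k and restart lambda from its initial value."""
    k = k_init
    for _ in range(max_k_trials):
        lam = lam_init
        for _ in range(max_lam_trials):
            if run_attack(lam, k):   # hypothetical attack call
                return lam, k
            lam /= 2.0
        k *= 2
    return None  # attack failed under the search budget
```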
F MORE RESULTS ON RESISTANCE TO DEFENSE METHODS
F.1 RESISTANCE TO PIECE-WISE CLUSTERING
We conduct experiments using the 8-bit quantized ResNet on CIFAR-10 with different clustering coefficients. We set the maximum search times of k to 5 for clustering coefficients 0.005 and 0.01 and keep the rest of the settings the same as those in Section 4.1. The results are presented in Table 4. As shown in the table, all values of Nflip are larger than those for attacking models without defense, which is similar to Table 2. Our method achieves a 100% ASR with the fewest Nflip under all three clustering coefficients. Although TBT obtains a smaller ∆Nflip than our method, it fails to achieve a satisfactory ASR. For example, TBT achieves only a 10.1% ASR when the clustering coefficient is set to 0.01. We observe that piece-wise clustering reduces the original accuracy for all clustering coefficients, and this phenomenon is more significant as the clustering coefficient increases. The results also show that a larger clustering coefficient (e.g., 0.01) does not guarantee a more robust model, which is consistent with the finding in (He et al., 2020).
F.2 RESISTANCE TO LARGER MODEL CAPACITY
Besides the results for networks with a 2× width shown in Section 4.3, we also evaluate all methods against models with a 3× and 4× width. All settings are the same as those used in Section 4.1. The results are provided in Table 5. Among all attack methods, our method is the least affected by increasing the network width. Especially for the network with a 4× width, our ∆Nflip is only 2.80. The results demonstrate the superiority of the formulated BIP problem and its optimization. Moreover, compared with piece-wise clustering, a larger model capacity improves the original accuracy, but increases the model size and the computational complexity.
G DISCUSSIONS
G.1 COMPARING BACKDOOR, ADVERSARIAL, AND WEIGHT ATTACK
An attacker can achieve malicious purposes utilizing backdoor, adversarial, and weight attacks. In this section, we emphasize the differences among them.
Backdoor attack happens in the training stage and requires that the attacker can tamper with the training data or even the training process (Liu et al., 2020b; Li et al., 2020). By poisoning some training samples with a trigger, the attacker can control the behavior of the attacked DNN in the inference stage. For example, images with reflections are misclassified into a target class, while benign images are classified normally (Liu et al., 2020a). However, such an attack paradigm causes accuracy degradation on benign samples, which makes it detectable by users. Besides, these methods also require modifying samples in the inference stage, which is sometimes impossible for the attacker. Many defense methods against backdoor attack have been proposed, such as the preprocessing-based defense (Liu et al., 2017b), the model reconstruction-based defense (Liu et al., 2018a), and the trigger synthesis-based defense (Wang et al., 2019).
Adversarial attack modifies samples in the inference stage by adding small perturbations that remain imperceptible to the human visual system (Akhtar & Mian, 2018). Since adversarial attack only modifies inputs while keeping the model unchanged, it has no effect on benign samples. Besides the basic white-box attack, the black-box attack (Wu et al., 2020b; Chen et al., 2020) and universal attack (Zhang et al., 2020b;a) have attracted wide attention. Inspired by its success in classification, it has also been extended to other tasks, including image captioning (Xu et al., 2019), retrieval (Bai et al., 2020; Feng et al., 2020), etc. Similarly, recent studies have demonstrated many defense methods against adversarial attack, including the preprocessing-based defense (Xie et al., 2018), the detection-based defense (Xu et al., 2017), and the adversarial learning-based defense (Carmon et al., 2019; Wu et al., 2020c).
Weight attack modifies model parameters in the deployment stage, which is the paradigm studied in this work. Weight attack generally aims at misleading the DNN model on selected sample(s), while having a minor effect on other samples (Zhao et al., 2019; Rakin et al., 2020b). Many studies (Yao et al., 2020; Breier et al., 2018; Pan, 2020) have demonstrated that the DNN parameters can be modified at the bit-level in memory using fault injection techniques (Agoyan et al., 2010; Kim et al., 2014; Selmke et al., 2015) in practice. Note that defense methods against weight attack have not been well studied. Although some defense methods (He et al., 2020) have been proposed, they cannot achieve satisfactory performance; for example, our method still achieves a 100% attack success rate against the two evaluated defense methods. We hope our work will encourage further investigation of the security of model parameters from both the attack and defense sides.
G.2 COMPARING TA-LBF WITH OTHER WEIGHT ATTACKS
We compare our TA-LBF with other weight attack methods, including TBT (Rakin et al., 2020a), T-BFA (Rakin et al., 2020b), GDA (Liu et al., 2017a), and FSA (Zhao et al., 2019) in this section. TBT tampers with both the test sample and the model parameters. Specifically, it first locates critical bits and generates a trigger, and then flips these bits so that all inputs embedded with the trigger are classified into a target class. However, the malicious samples are easily detected by human inspection or by many detection methods (Tran et al., 2018; Du et al., 2020). We do not modify the samples to perform TA-LBF, which makes the attack more stealthy. Rakin et al. (2020b) proposed T-BFA, which misclassifies all samples (N-to-1 version) or samples from a source class (1-to-1 version) into a target class. Our method aims at misclassifying a specific sample, which meets the attacker's requirement in some scenarios; for example, the attacker may want to manipulate the behavior of a face recognition engine on one specific input. Since it affects multiple samples, T-BFA may not be stealthy enough when attacking real-world applications. GDA (Liu et al., 2017a) and FSA (Zhao et al., 2019) modify model parameters at the weight-level rather than the bit-level. They are designed for misclassifying multiple samples from arbitrary classes, which makes it infeasible for them to modify only the parameters connected to the source and target class; as shown in the experiments, they modify more parameters than our method, which might be due to this reason. Besides, TBT, T-BFA, and GDA determine the critical weights to modify using heuristic strategies, while our TA-LBF adopts an optimization-based method. Although FSA applies ADMM for solving its optimization problem, it has no explicit constraint to control the number of modified parameters, which causes it to modify more parameters than GDA and our TA-LBF.
H TRADE-OFF BETWEEN THREE EVALUATION METRICS
In this section, we investigate the trade-off between three adopted evaluation metrics (i.e., PA-ACC, ASR, and Nflip) for our attack. All experiments are conducted on CIFAR-10 and ImageNet dataset in attacking the 8-bit quantized ResNet.
We first discuss the trade-off between PA-ACC and Nflip by fixing the ASR at 100% using the search strategy in Appendix E.3 and adjusting the initial λ and k to obtain different attack results. The two curves on the left show that increasing Nflip can improve the PA-ACC when Nflip is relatively small; the PA-ACC decreases with increasing Nflip once Nflip is greater than a threshold. This phenomenon demonstrates that constraining the number of bit-flips is essential to ensure the attack stealthiness, as mentioned in Section 3.2. To study the trade-off between PA-ACC
and ASR, we fix the parameter k as 10 for approximately 10 bit-flips and adjust the parameter λ to obtain different PA-ACC and ASR results. The trade-off curves between PA-ACC and ASR show that increasing ASR can decrease the PA-ACC significantly. Therefore, how to achieve high ASR and PA-ACC simultaneously remains an important open problem.

1. What is the main contribution of the paper regarding optimization-based algorithms?
2. What are the strengths and weaknesses of the proposed method in terms of technical contributions and hyperparameter tuning?
3. How does the reviewer assess the complexity of the proposed method, and what would they like to see improved in this regard?
4. What are some small issues with the paper, such as referencing ADMM?
5. What are the strong points of the paper, particularly regarding its distinction from other SOTA methods?
6. Are there any errors in the paper, such as in equation (6)?

Review
The paper proposes an optimization-based algorithm for bit-flipping a limited number of bits in a quantized / binarized deep-learning model, so that the prediction on a target input example is flipped while the prediction on the other examples is as untouched as possible. The problem is formulated as a binary integer programming (BIP) problem, which is then solved using a recent ADMM-based technique. Experiments CIFAR-10 and ImageNet show that the proposed method outperforms the SOTA.
Weak points:
The main shortcoming of this paper is the limited technical contribution.
The hyper-parameter tuning is not clearly outlined / explained. This is problematic since there are quite a number of hyper-parameters. For example, I can count 4 hyper-parameters in equation (9) alone (including ADMM stepsizes rho_i).
It would be nice to have a back-of-envelope estimation of the complexity (running time, number of flops, etc.) of the proposed method, as a function of the maximum number of bits to flip (say).
Small issues:
S. Boyd and co-workers have done a great job in popularizing ADMM. However, this method has been around at least since the 70s. Key papers to reference when talking about ADMM include:
Glowinski and Marroco (1975) "Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires"
Gabay and Mercier (1976) "A dual algorithm for the solution of nonlinear variational problems via finite element approximation"
Strong points:
The strongest point in favor of this paper is that unlike the SOTA methods, the proposed method only flips a very limited number of bits in the binarized DNN model, while achieving the same or higher accuracy.
The experiments are very detailed and well-presented.
Errors:
The equivalence in (6) doesn't seem to make sense. In the definition of S_p, b̂ is an element of what?
Compared to the backdoor attack and adversarial attack, a novel attack paradigm, dubbed weight attack (Breier et al., 2018), has been rarely studied. It assumes that the attacker has full access to the memory of a device, such that he/she can directly change the parameters of a deployed model to achieve some malicious purposes (e.g., crushing a fully functional DNN and converting it to a random output generator (Rakin et al., 2019)). Since weight attack neither modifies the input nor control the training process, both the service provider and the user are difficult to realize the existence of the attack. In practice, since the deployed DNN model is stored as binary bits in the memory, the attacker can modify the model parameters using some physical fault injection techniques, such as Row Hammer Attack (Agoyan et al., 2010; Selmke et al., 2015) and Laser Beam Attack (Kim et al., 2014). These techniques can precisely flip any bit of the data in the memory. Some previous works (Rakin et al., 2019; 2020a;b) have demonstrated that it is feasible to change the model weights via bit flipping to achieve some malicious purposes. However, the critical bits are identified mostly
†This work was done when Jiawang Bai was an intern at Tencent AI Lab. Correspondence to: Baoyuan Wu (wubaoyuan@cuhk.edu.cn) and Shu-Tao Xia (xiast@sz.tsinghua.edu.cn).
using some heuristic strategies in their methods. For example, Rakin et al. (2019) combined gradient ranking and progressive search to identify the critical bits for flipping.
This work also focuses on the bit-level weight attack against DNNs in the deployment stage, whereas with two different goals, including effectiveness and stealthiness. The effectiveness requires that the attacked model can misclassify a specific sample to a attacker-specified target class without any sample modification, while the stealthiness encourages that the prediction accuracy of other samples will not be significantly reduced. As shown in Fig. 1, to achieve these goals, we propose to identify and flip bits that are critical to the prediction of the specific sample but not significantly impact the prediction of other samples. Specifically, we treat each bit in the memory as a binary variable, and our task is to determine its state (i.e., 0 or 1). Accordingly, it can be formulated as a binary integer programming (BIP) problem. To further improve the stealthiness, we also limit the number of flipped bits, which can be formulated as a cardinality constraint. However, how to solve the BIP problem with a cardinality constraint is a challenging problem. Fortunately, inspired by an advanced optimization method, the `p-box ADMM (Wu & Ghanem, 2018), this problem can be reformulated as a continuous optimization problem, which can further be efficiently and effectively solved by the alternating direction method of multipliers (ADMM) (Glowinski & Marroco, 1975; Gabay & Mercier, 1976). Consequently, the flipped bits can be determined through optimization rather than the original heuristic strategy, which makes our attack more effective. Note that we also conduct attack against the quantized DNN models, following the setting in some related works (Rakin et al., 2019; 2020a). Extensive experiments demonstrate the superiority of the proposed method over several existing weight attacks. For example, our method achieves a 100% attack success rate with 7.37 bit-flips and 0.09% accuracy degradation of the rest unspecific inputs in attacking a 8-bit quantized ResNet-18 model on ImageNet. Moreover, we also demonstrate that the proposed method is also more resistant to existing defense methods.
The main contributions of this work are three-fold. 1) We explore a novel attack scenario where the attacker enforces a specific sample to be predicted as a target class by modifying the weights of a deployed model via bit flipping without any sample modification. 2) We formulate the attack as a BIP problem with the cardinality constraint and propose an effective and efficient method to solve this problem. 3) Extensive experiments verify the superiority of the proposed method against DNNs with or without defenses.
2 RELATED WORKS
Neural Network Weight Attack. How to perturb the weights of a trained DNN for malicious purposes received extensive attention (Liu et al., 2017a; 2018b; Hong et al., 2019). Liu et al. (2017a) firstly proposed two schemes to modify model parameters for misclassification without and with considering stealthiness, which is dubbed single bias attack (SBA) and gradient descent
attack (GDA) respectively. After that, Trojan attack (Liu et al., 2018b) was proposed, which injects malicious behavior to the DNN by generating a general trojan trigger and then retraining the model. This method requires to change lots of parameters. Recently, fault sneaking attack (FSA) (Zhao et al., 2019) was proposed, which aims to misclassify certain samples into a target class by modifying the DNN parameters with two constraints, including maintaining the classification accuracy of other samples and minimizing parameter modifications. Note that all those methods are designed to misclassify multiple samples instead of a specific sample, which may probably modify lots of parameters or degrade the accuracy of other samples sharply.
Bit-Flip based Attack. Recently, some physical fault injection techniques (Agoyan et al., 2010; Kim et al., 2014; Selmke et al., 2015) were proposed, which can be adopted to precisely flip any bit in the memory. Those techniques promote researchers to study how to modify model parameters at the bit-level. As a branch of weight attack, the bit-flip based attack was firstly explored in (Rakin et al., 2019). It proposed an untargeted attack that can convert the attacked DNN to a random output generator with several bit-flips. Besides, Rakin et al. (2020a) proposed the targeted bit Trojan (TBT) to inject the fault into DNNs by flipping some critical bits. Specifically, the attacker flips the identified bits to force the network to classify all samples embedded with a trigger to a certain target class, while the network operates with normal inference accuracy with benign samples. Most recently, Rakin et al. (2020b) proposed the targeted bit-flip attack (T-BFA), which achieves malicious purposes without modifying samples. Specifically, T-BFA can mislead samples from single source class or all classes to a target class by flipping the identified weight bits. It is worth noting that the above bit-flip based attacks leverage heuristic strategies to identify critical weight bits. How to find critical bits for the bit-flip based attack method is still an important open question.
3 TARGETED ATTACK WITH LIMITED BIT-FLIPS (TA-LBF)
3.1 PRELIMINARIES
Storage and Calculation of Quantized DNNs. Currently, it is a widely-used technique to quantize DNNs before deploying on devices for efficiency and reducing storage size. For each weight in l-th layer of a Q-bit quantized DNN, it will be represented and then stored as the signed integer in two’s complement representation (v = [vQ; vQ−1; ...; v1] ∈ {0, 1}Q) in the memory. Attacker can modify the weights of DNNs through flipping the stored binary bits. In this work, we adopt the layer-wise uniform weight quantization scheme similar to Tensor-RT (Migacz, 2017). Accordingly, each binary vector v can be converted to a real number by a function h(·), as follow:
h(v) = (−2Q−1 · vQ + Q−1∑ i=1 2i−1 · vi) ·∆l, (1)
where l indicates which layer the weight is from, ∆l > 0 is a known and stored constant which represents the step size of the l-th layer weight quantizer.
Notations. We denote a Q-bit quantized DNN-based classification model as f : X → Y , where X ∈ Rd being the input space and Y ∈ {1, 2, ...,K} being the K-class output space. Assuming that the last layer of this DNN model is a fully-connected layer with B ∈ {0, 1}K×C×Q being the quantized weights, where C is the dimension of last layer’s input. Let Bi,j ∈ {0, 1}Q be the two’s complement representation of a single weight and Bi ∈ {0, 1}C×Q denotes all the binary weights connected to the i-th output neuron. Given a test sample x with the ground-truth label s, f(x; Θ,B) ∈ [0, 1]K is the output probability vector and g(x; Θ) ∈ RC is the input of the last layer, where Θ denotes the model parameters without the last layer.
Attack Scenario. In this paper, we focus on the white-box bit-flip based attack, which was first introduced in (Rakin et al., 2019). Specifically, we assume that the attacker has full knowledge of the model (including it’s architecture, parameters, and parameters’ location in the memory), and can precisely flip any bit in the memory. Besides, we also assume that attackers can have access to a small portion of benign samples, but they can not tamper the training process and the training data.
Attacker’s Goals. Attackers have two main goals, including the effectiveness and the stealthiness. Specifically, effectiveness requires that the attacked model can misclassify a specific sample to a predefined target class without any sample modification, and the stealthiness requires that the prediction accuracy of other samples will not be significantly reduced.
3.2 THE PROPOSED METHOD
Loss for Ensuring Effectiveness. Recall that our first target is to force a specific image to be classified as the target class by modifying the model parameters at the bit-level. To this end, the most straightforward way is maximizing the logit of the target class while minimizing that of the source class. For a sample x, the logit of a class can be directly determined by the input of the last layer g(x; Θ) and weights connected to the node of that class. Accordingly, we can modify weights only connected to the source and target class to fulfill our purpose, as follows:
L1(x; Θ,B, B̂s, B̂t) = max ( m− p(x; Θ, B̂t) + δ, 0 ) + max ( p(x; Θ, B̂s)−m+ δ, 0 ) , (2)
where p(x; Θ, B̂i) = [h(B̂i,1);h(B̂i,2); ...;h(B̂i,C)]>g(x; Θ) denotes the logit of class i (i = s or i = t), h(·) is the function defined in Eq. (1), m = max
i∈{0,...,K}\{s} p(x; Θ,Bi), and δ ∈ R
indicates a slack variable, which will be specified in later experiments. The first term of L1 aims at increasing the logit of the target class, while the second term is to decrease the logit of the source class. The loss L1 is 0 only when the output on target class is more than m + δ and the output on source class is less than m − δ. That is, the prediction on x of the target model is the predefined target class. Note that B̂s, B̂t ∈ {0, 1}C×Q are two variables we want to optimize, corresponding to the weights of the fully-connected layer w.r.t. class s and t, respectively, in the target DNN model. B ∈ {0, 1}K×C×Q denotes the weights of the fully-connected layer of the original DNN model, and it is a constant tensor in L1. For clarity, hereafter we simplify L1(x; Θ,B, B̂s, B̂t) as L1(B̂s, B̂t), since x and Θ are also provided input and weights.
Loss for Ensuring Stealthiness. As we mentioned in Section 3.1, we assume that the attacker can get access to an auxiliary sample set {(xi, yi)}Ni=1. Accordingly, the stealthiness of the attack can be formulated as follows:
L2(B̂s, B̂t) = N∑ i=1 `(f(xi; Θ,B{1,...,K}\{s,t}, B̂s, B̂t), yi), (3)
where B{1,...,K}\{s,t} denotes {B1,B2, ...,BK}\{Bs,Bt}, and fj(xi; Θ,B{1,...,K}\{s,t}, B̂s, B̂t) indicates the posterior probability of xi w.r.t. class j, caclulated by Softmax(p(xi; Θ, B̂j)) or Softmax(p(xi; Θ,Bj)). `(·, ·) is specified by the cross entropy loss. To keep clarity, xi, Θ and B{1,...,K}\{s,t} are omitted in L2(B̂s, B̂t) . Besides, to better meet our goal, a straightforward additional approach is reducing the magnitude of the modification. In this paper, we constrain the number of bit-flips less than k. Physical bit flipping techniques can be time-consuming as discussed in (Van Der Veen et al., 2016; Zhao et al., 2019). Moreover, such techniques lead to abnormal behaviors in the attacked system (e.g., suspicious cache activity of processes), which may be detected by some physical detection-based defenses (Gruss et al., 2018). As such, minimizing the number of bit-flips is critical to make the attack more efficient and practical.
Overall Objective. In conclusion, the final objective function is as follows:
min B̂s,B̂t
L1(B̂s, B̂t) + λL2(B̂s, B̂t),
s.t. B̂s ∈ {0, 1}C×Q, B̂t ∈ {0, 1}C×Q, dH(Bs, B̂s) + dH(Bt, B̂t) ≤ k, (4)
where dH(·, ·) denotes the Hamming distance and λ > 0 is a trade-off parameter. For the sake of brevity, Bs and Bt are concatenated and further reshaped to the vector b ∈ {0, 1}2CQ. Similarly, B̂s and B̂t are concatenated and further reshaped to the vector b̂ ∈ {0, 1}2CQ. Besides, for binary vector b and b̂, there exists a nice relationship between Hamming distance and Euclidean distance: dH(b, b̂) = ||b− b̂||22. The new formulation of the objective is as follows:
min b̂
L1(b̂) + λL2(b̂), s.t. b̂ ∈ {0, 1}2CQ, ||b− b̂||22 − k ≤ 0. (5)
Problem (5) is denoted as TA-LBF (targeted attack with limited bit-flips). Note that TA-LBF is a binary integer programming (BIP) problem, whose optimization is challenging. We will introduce an effective and efficient method to solve it in the following section.
3.3 AN EFFECTIVE OPTIMIZATION METHOD FOR TA-LBF
To solve the challenging BIP problem (5), we adopt the generic solver for integer programming, dubbed `p-Box ADMM (Wu & Ghanem, 2018). The solver presents its superior performance in many tasks, e.g., model pruning (Li et al., 2019), clustering (Bibi et al., 2019), MAP inference (Wu et al., 2020a), adversarial attack (Fan et al., 2020), etc.. It proposed to replace the binary constraint equivalently by the intersection of two continuous constraints, as follows
b̂ ∈ {0, 1}2CQ ⇔ b̂ ∈ (Sb ∩ Sp), (6)
where Sb = [0, 1]2CQ indicates the box constraint, and Sp = {b̂ : ||b̂ − 12 || 2 2 = 2CQ 4 } denotes the `2-sphere constraint. Utilizing (6), Problem (5) is equivalently reformulated as
min b̂,u1∈Sb,u2∈Sp,u3∈R+
L1(b̂) + λL2(b̂), s.t. b̂ = u1, b̂ = u2, ||b− b̂||22 − k + u3 = 0, (7)
where two extra variables u1 and u2 are introduced to split the constraints w.r.t. b̂. Besides, the nonnegative slack variable u3 ∈ R_+ is used to transform ||b − b̂||_2^2 − k ≤ 0 in (5) into ||b − b̂||_2^2 − k + u3 = 0. The above constrained optimization problem can be efficiently solved by the alternating direction method of multipliers (ADMM) (Boyd et al., 2011).
Following the standard procedure of ADMM, we first present the augmented Lagrangian function of the above problem, as follows:
L(b̂, u1, u2, u3, z1, z2, z3) = L1(b̂) + λL2(b̂) + z1^T(b̂ − u1) + z2^T(b̂ − u2) + z3(||b − b̂||_2^2 − k + u3) + c1(u1) + c2(u2) + c3(u3) + (ρ1/2)||b̂ − u1||_2^2 + (ρ2/2)||b̂ − u2||_2^2 + (ρ3/2)(||b − b̂||_2^2 − k + u3)^2,  (8)
where z1, z2 ∈ R^{2CQ} and z3 ∈ R are dual variables, and ρ1, ρ2, ρ3 > 0 are penalty factors, which will be specified later. c1(u1) = I_{u1∈S_b}, c2(u2) = I_{u2∈S_p}, and c3(u3) = I_{u3∈R_+} capture the constraints S_b, S_p, and R_+, respectively. The indicator function I_{a} = 0 if a is true; otherwise, I_{a} = +∞. Based on the augmented Lagrangian function, the primal and dual variables are updated iteratively, with r indicating the iteration index.
Given (b̂^r, z1^r, z2^r, z3^r), update (u1^{r+1}, u2^{r+1}, u3^{r+1}). Given (b̂^r, z1^r, z2^r, z3^r), the variables (u1, u2, u3) are independent, and they can be optimized in parallel, as follows:

u1^{r+1} = argmin_{u1∈S_b} (z1^r)^T(b̂^r − u1) + (ρ1/2)||b̂^r − u1||_2^2 = P_{S_b}(b̂^r + z1^r/ρ1),
u2^{r+1} = argmin_{u2∈S_p} (z2^r)^T(b̂^r − u2) + (ρ2/2)||b̂^r − u2||_2^2 = P_{S_p}(b̂^r + z2^r/ρ2),
u3^{r+1} = argmin_{u3∈R_+} z3^r(||b − b̂^r||_2^2 − k + u3) + (ρ3/2)(||b − b̂^r||_2^2 − k + u3)^2 = P_{R_+}(−||b − b̂^r||_2^2 + k − z3^r/ρ3),  (9)
where P_{S_b}(a) = min(1, max(0, a)) with a ∈ R^n is the projection onto the box constraint S_b; P_{S_p}(a) = (√n/2)·ā/||ā||_2 + 1/2 with ā = a − 1/2 indicates the projection onto the ℓ2-sphere constraint S_p (Wu & Ghanem, 2018); P_{R_+}(a) = max(0, a) with a ∈ R indicates the projection onto R_+.
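In code, the three projections are one-liners; a minimal NumPy sketch (assuming the sphere-projection input differs from the center, so the direction ā is well defined):

```python
import numpy as np

def proj_box(a):
    # P_{S_b}: elementwise projection onto the box [0, 1]^n.
    return np.clip(a, 0.0, 1.0)

def proj_sphere(a):
    # P_{S_p}: projection onto the l2-sphere centered at (1/2)*1 with radius
    # sqrt(n)/2; assumes a != (1/2)*1 so that a_bar is nonzero.
    n = a.size
    a_bar = a - 0.5
    return (np.sqrt(n) / 2.0) * a_bar / np.linalg.norm(a_bar) + 0.5

def proj_nonneg(a):
    # P_{R_+}: projection of a scalar onto the nonnegative reals.
    return max(0.0, a)
```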
Given (u1^{r+1}, u2^{r+1}, u3^{r+1}, z1^r, z2^r, z3^r), update b̂^{r+1}. Although there is no closed-form solution for b̂^{r+1}, it can be easily updated by the gradient descent method, as both L1(b̂) and L2(b̂) are differentiable w.r.t. b̂, as follows

b̂^{r+1} ← b̂^r − η · ∂L(b̂, u1^{r+1}, u2^{r+1}, u3^{r+1}, z1^r, z2^r, z3^r)/∂b̂ |_{b̂=b̂^r},  (10)
where η > 0 denotes the step size. Note that we can run multiple steps of gradient descent in the above update. Both the number of steps and η will be specified in later experiments. Besides, due to the space limit, the detailed derivation of ∂L/∂b̂ will be presented in Appendix A.
Given (b̂^{r+1}, u1^{r+1}, u2^{r+1}, u3^{r+1}), update (z1^{r+1}, z2^{r+1}, z3^{r+1}). The dual variables are updated by the gradient ascent method, as follows

z1^{r+1} = z1^r + ρ1(b̂^{r+1} − u1^{r+1}),
z2^{r+1} = z2^r + ρ2(b̂^{r+1} − u2^{r+1}),
z3^{r+1} = z3^r + ρ3(||b − b̂^{r+1}||_2^2 − k + u3^{r+1}).  (11)
Remarks. 1) Note that since (u1^{r+1}, u2^{r+1}, u3^{r+1}) are updated in parallel, their updates belong to the same block. Thus, the above algorithm is a two-block ADMM algorithm. We provide the algorithm outline in Appendix B. 2) Except for the update of b̂^{r+1}, all other updates are very simple and efficient. The computational cost of the whole algorithm is analyzed in Appendix C. 3) Due to the inexact solution for b̂^{r+1} using gradient descent, the theoretical convergence of the whole ADMM algorithm cannot be guaranteed. However, as demonstrated in many previous works (Gol'shtein & Tret'yakov, 1979; Eckstein & Bertsekas, 1992; Boyd et al., 2011), inexact two-block ADMM often shows good practical convergence, which is also the case in our later experiments. Besides, the numerical convergence analysis is presented in Appendix D. 4) The proper adjustment of (ρ1, ρ2, ρ3) can accelerate the practical convergence, which will be specified later.
4 EXPERIMENTS
4.1 EVALUATION SETUP
Settings. We compare our method (TA-LBF) with GDA (Liu et al., 2017a), FSA (Zhao et al., 2019), T-BFA (Rakin et al., 2020b), and TBT (Rakin et al., 2020a). All these methods can be adopted to misclassify a specific image into a target class. We also take the fine-tuning (FT) of the last fully-connected layer as a baseline method. We conduct experiments on CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015). We randomly select 1,000 images from each dataset as the evaluation set for all methods. Specifically, for each of the 10 classes in CIFAR-10, we perform attacks on 100 randomly selected validation images from the other 9 classes. For ImageNet, we randomly choose 50 target classes. For each target class, we perform attacks on 20 images randomly selected from the remaining classes in the validation set. Besides, for all methods except GDA, which does not employ auxiliary samples, we provide 128 and 512 auxiliary samples on CIFAR-10 and ImageNet, respectively. Following the settings in (Rakin et al., 2020a;b), we adopt the quantized ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015) as the target models. For our TA-LBF, the trade-off parameter λ and the constraint parameter k affect the attack stealthiness and the attack success rate. We adopt a strategy for jointly searching λ and k, which is specified in Appendix E.3. More descriptions of our settings are provided in Appendix E.
Evaluation Metrics. We adopt three metrics to evaluate the attack performance, i.e., the post attack accuracy (PA-ACC), the attack success rate (ASR), and the number of bit-flips (Nflip). PA-ACC denotes the post attack accuracy on the validation set, excluding the specific attacked sample and the auxiliary samples. ASR is defined as the ratio of attacked samples that are successfully misclassified into the target class among all 1,000 attacked samples. Nflip is the number of bit-flips required for an attack. A better attack performance corresponds to a higher PA-ACC and ASR and a lower Nflip. Besides, we also show the accuracy of the original model, denoted as ACC.
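As a concrete reading of these definitions, a hypothetical per-attack helper might compute the three quantities as follows; ASR is then the mean of `success` over the 1,000 attacks. The names and interface are assumptions for illustration only.

```python
import numpy as np

def single_attack_metrics(pred_on_attacked, target_class,
                          preds_rest, labels_rest, bits_before, bits_after):
    # success: whether this attacked sample landed in the target class
    # pa_acc:  post-attack accuracy on the remaining validation samples
    # n_flip:  Hamming distance between original and modified bit tensors
    success = bool(pred_on_attacked == target_class)
    pa_acc = float(np.mean(np.asarray(preds_rest) == np.asarray(labels_rest)))
    n_flip = int(np.sum(np.asarray(bits_before) != np.asarray(bits_after)))
    return success, pa_acc, n_flip
```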
4.2 MAIN RESULTS
Results on CIFAR-10. The results of all methods on CIFAR-10 are shown in Table 1. Our method achieves a 100% ASR with the fewest Nflip for all bit-widths and architectures. FT modifies the largest number of bits among all methods since it places no limit on parameter modifications. Due to the absence of the training data, the PA-ACC of FT is also poor. These results indicate that fine-tuning the trained DNN as an attack method is infeasible. Although T-BFA flips the second-fewest bits under three cases, it fails to achieve a higher ASR than GDA and FSA. In terms of PA-ACC, TA-LBF is comparable to the other methods. Note that the PA-ACC of TA-LBF significantly outperforms that of GDA, which is the most competitive baseline w.r.t. ASR and Nflip. The PA-ACC of GDA is relatively poor because it does not employ auxiliary samples. Achieving the highest ASR, the lowest Nflip, and a comparable PA-ACC demonstrates that our optimization-based method is superior to the heuristic methods (TBT, T-BFA, and GDA).
Results on ImageNet. The results on ImageNet are shown in Table 1. It can be observed that GDA shows very competitive performance compared to other methods. However, our method obtains the highest PA-ACC, the fewest bit-flips (less than 8), and a 100% ASR in attacking ResNet. For VGG, our method also achieves a 100% ASR with the fewest Nflip for both bit-widths. The Nflip results of our method are mainly attributed to the cardinality constraint on the number of bit-flips. Moreover, for our method, the average PA-ACC degradation over four cases on ImageNet is only 0.06%, which demonstrates the stealthiness of our attack. When comparing the results of ResNet and VGG, an interesting observation is that all methods require significantly more bit-flips for VGG. One reason is that VGG is much wider than ResNet. Similar to the claim in (He et al., 2020), increasing the network width contributes to the robustness against the bit-flip based attack.
4.3 RESISTANCE TO DEFENSE METHODS
Resistance to Piece-wise Clustering. He et al. (2020) proposed a novel training technique, called piece-wise clustering, to enhance the network robustness against the bit-flip based attack. Such a training technique introduces an additional weight penalty to the inference loss, which has the effect of eliminating close-to-zero weights (He et al., 2020). We test the resistance of all attack methods to the piece-wise clustering. We conduct experiments with the 8-bit quantized ResNet on CIFAR-10 and ImageNet. Following the ideal configuration in (He et al., 2020), the clustering coefficient, which is a hyper-parameter of piece-wise clustering, is set to 0.001 in our evaluation. For our method, the initial k is set to 50 on ImageNet and the rest settings are the same as those in Section 4.1. Besides the three metrics in Section 4.1, we also present the number of increased Nflip compared to the model without defense (i.e., results in Table 1), denoted as ∆Nflip.
The results of the resistance to piece-wise clustering of all attack methods are shown in Table 2. They show that the model trained with piece-wise clustering increases the number of required bit-flips for all attack methods. However, our method still achieves a 100% ASR with the fewest bit-flips on both datasets. Although TBT achieves a smaller ∆Nflip than ours on CIFAR-10, its ASR is only 52.3%, which also verifies the defense effectiveness of piece-wise clustering. Compared with other methods, TA-LBF achieves the fewest ∆Nflip on ImageNet and the best PA-ACC on both datasets. These results demonstrate the superiority of our method over other methods when attacking models trained with piece-wise clustering.
Resistance to Larger Model Capacity. Previous studies (He et al., 2020; Rakin et al., 2020b) observed that increasing the network capacity can improve the robustness against the bit-flip based attack. Accordingly, we evaluate all attack methods against the models with a larger capacity using the 8-bit quantized ResNet on both datasets. Similar to the strategy in (He et al., 2020), we increase the model capacity by varying the network width (i.e., 2× width in our experiments). All settings of our method are the same as those used in Section 4.1.
The results are presented in Table 2. We observe that all methods require more bit-flips to attack the model with the 2× width. To some extent, it demonstrates that the wider network with the same architecture is more robust against the bit-flip based attack. However, our method still achieves a 100% ASR with the fewest Nflip and ∆Nflip. Moreover, when comparing the two defense methods, we find that piece-wise clustering performs better than the model with a larger capacity in terms of ∆Nflip. However, piece-wise clustering training also causes the accuracy decrease of the original model (e.g., from 92.16% to 91.01% on CIFAR-10). We provide more results in attacking models with defense under different settings in Appendix F.
4.4 ABLATION STUDY
We perform ablation studies on the parameters λ and k, and the number of auxiliary samples N. We use the 8-bit quantized ResNet on CIFAR-10 as the representative for analysis. We discuss the attack performance of TA-LBF under different values of λ while k is fixed at 20, and under different values of k while λ is fixed at 10. To analyze the effect of N, we vary N from 25 to 800 and keep other settings the same as those in Section 4.1. The results are presented in Fig. 2. We observe that our method achieves a 100% ASR when λ is less than 20. As expected, the PA-ACC increases while the ASR decreases along with the increase of λ. The plot for parameter k shows that k exactly limits the number of bit-flips, while other attack methods do not involve such a constraint. This advantage is critical since it allows the attacker to identify a limited number of bits to perform an attack when the budget is fixed. As shown in the figure, increasing the number of auxiliary samples up to about 200 has a marked positive impact on the PA-ACC. It is intuitive that more auxiliary samples can lead to a better PA-ACC. The observation also indicates that TA-LBF still works well without too many auxiliary samples.
4.5 VISUALIZATION OF DECISION BOUNDARY
To further compare FSA and GDA with our method, we visualize the decision boundaries of the original and the post attack models in Fig. 3. We adopt a four-layer Multi-Layer Perceptron trained on a simulated 2-D Blob dataset with 4 classes. The original decision boundary indicates that the original model classifies all data points almost perfectly. The attacked sample is classified into Class 3 by all methods. Visually, GDA modifies the decision boundary drastically, especially for Class 0. In contrast, our method modifies the decision boundary mainly around the attacked sample. Although FSA is visually comparable to ours in Fig. 3, it flips 10× more bits than GDA and TA-LBF. In terms of the numerical results, TA-LBF achieves the best PA-ACC and the fewest Nflip. This finding verifies that our method can achieve a successful attack while only slightly tweaking the original classifier.
5 CONCLUSION
In this work, we have presented a novel attack paradigm in which the weights of a deployed DNN are slightly changed via bit flipping in the memory to produce a target prediction for a specific sample, while the predictions on other samples are not significantly influenced. Since the weights are stored as binary bits in the memory, we formulate this attack as a binary integer programming (BIP) problem, which can be effectively and efficiently solved by a continuous algorithm. Since the critical bits are determined through optimization, the proposed method can achieve the attack goals by flipping only a few bits, and it shows very good performance under different experimental settings.
ACKNOWLEDGMENTS
This work is supported in part by the National Key Research and Development Program of China under Grant 2018YFB1800204, the National Natural Science Foundation of China under Grant 61771273, the R&D Program of Shenzhen under Grant JCYJ20180508152204044. Baoyuan Wu is supported by the Natural Science Foundation of China under grant No. 62076213, and the university development fund of the Chinese University of Hong Kong, Shenzhen under grant No. 01001810.
B ALGORITHM OUTLINE
Algorithm 1 Continuous optimization for the BIP problem (5).
Input: The original quantized DNN model f with weights Θ, B, attacked sample x with ground-truth label s, target class t, auxiliary sample set {(x_i, y_i)}_{i=1}^N, hyper-parameters λ, k, and δ.
Output: b̂.
1: Initialize u1^0, u2^0, u3^0, z1^0, z2^0, z3^0, b̂^0 and let r ← 0;
2: while not converged do
3:   Update u1^{r+1}, u2^{r+1}, and u3^{r+1} as in Eq. (9);
4:   Update b̂^{r+1} as in Eq. (10);
5:   Update z1^{r+1}, z2^{r+1}, and z3^{r+1} as in Eq. (11);
6:   r ← r + 1.
7: end while
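A compact Python rendering of Algorithm 1, as a sketch under stated assumptions: `grad_L` stands in for the gradient of the augmented Lagrangian w.r.t. b̂ (derived in Appendix A and assumed to close over the data, λ, k, and the original bits b), the penalty-increase schedule of Appendix E.3 is omitted for brevity, and the final rounding to binary is our own choice for the sketch.

```python
import numpy as np

def ta_lbf_admm(b, grad_L, k, rho=(1e-4, 1e-4, 1e-5), eta=0.01,
                inner_steps=5, max_iter=2000, tol=1e-4):
    # b: original bit vector in {0, 1}^{2CQ}; returns the attacked bit vector.
    n = b.size
    b_hat = b.astype(float).copy()
    u1, u2, u3 = b_hat.copy(), b_hat.copy(), 0.0
    z1, z2, z3 = np.zeros(n), np.zeros(n), 0.0
    rho1, rho2, rho3 = rho
    for r in range(max_iter):
        # Eq. (9): parallel primal updates via the three projections
        u1 = np.clip(b_hat + z1 / rho1, 0.0, 1.0)
        a_bar = (b_hat + z2 / rho2) - 0.5
        u2 = (np.sqrt(n) / 2.0) * a_bar / np.linalg.norm(a_bar) + 0.5
        u3 = max(0.0, -np.sum((b - b_hat) ** 2) + k - z3 / rho3)
        # Eq. (10): a few gradient steps on b_hat
        for _ in range(inner_steps):
            b_hat = b_hat - eta * grad_L(b_hat, u1, u2, u3, z1, z2, z3)
        # Eq. (11): dual ascent
        z1 = z1 + rho1 * (b_hat - u1)
        z2 = z2 + rho2 * (b_hat - u2)
        z3 = z3 + rho3 * (np.sum((b - b_hat) ** 2) - k + u3)
        # stopping criterion of Appendix E.3
        if np.sum((b_hat - u1) ** 2) <= tol and np.sum((b_hat - u2) ** 2) <= tol:
            break
    return np.rint(np.clip(b_hat, 0, 1)).astype(int)  # final binarization (assumed)
```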
C COMPLEXITY ANALYSIS
The computational complexity of the proposed algorithm (i.e., Algorithm 1) consists of two parts, the forward and the backward pass. In terms of the forward pass, since Θ and B{1,...,K}\{s,t} are fixed during the optimization, their involved terms, including g(x; Θ) and p(x; Θ, Bi) for i ≠ s, t, are calculated only once. The main cost, from B̂s and B̂t, is O(2(N + 1)C²Q) per iteration, as there are N + 1 samples. In terms of the backward pass, the main cost is from the update of b̂^{r+1}, which is O(2(N + 1)CQ) per gradient step. Since all other updates are very simple, their costs are omitted here. Thus, the overall computational cost is O(T_outer · 2(N + 1)CQ · (C + T_inner)), with T_outer being the number of iterations of the overall algorithm and T_inner indicating the number of gradient steps in updating b̂^{r+1}. As shown in Appendix D, the proposed method TA-LBF always converges very fast in our experiments, thus T_outer is not very large. As specified in Appendix E.3, T_inner is set to 5 in our experiments. In short, the proposed method can be optimized very efficiently.
Besides, we also compare the computational complexity of different attacks empirically. Specifically, we compare the running time for attacking one image of different methods against the 8-bit quantized ResNet on the CIFAR-10 and ImageNet datasets. As shown in Table 3, TBT is the most time-consuming method among all attacks. Although the proposed TA-LBF is not superior to T-BFA, FSA, and GDA in terms of running time, this gap is tolerable when attacking a single image in the deployment stage. Besides, our method performs better in terms of PA-ACC, ASR, and Nflip, as demonstrated in our experiments.
D NUMERICAL CONVERGENCE ANALYSIS
We present the numerical convergence of TA-LBF in Fig. 4. Note that ||b̂ − u1||_2^2 and ||b̂ − u2||_2^2 characterize the degree of satisfaction of the box and ℓ2-sphere constraints, respectively. For the two examples from CIFAR-10 and ImageNet, the values of both indicators first increase, then drop, and finally approach 0. Another interesting observation is that L1 + λL2 first decreases evidently and then increases slightly. Such findings illustrate the optimization process of TA-LBF: in the early iterations, modifying the model parameters tends to achieve the two goals mentioned in Section 3.1; in the late iterations, b̂ is encouraged to satisfy the box and ℓ2-sphere constraints. We also observe that both examples stop when meeting ||b̂ − u1||_2^2 ≤ 10^{-4} and ||b̂ − u2||_2^2 ≤ 10^{-4} and do not exceed the maximum number of iterations (i.e., 2000). The numerical results demonstrate the fast convergence of our method in practice.
E EVALUATION SETUP
E.1 BASELINE METHODS
Since GDA (Liu et al., 2017a) and FSA (Zhao et al., 2019) are originally designed for attacking the full-precision network, we adapt these two methods to attack the quantized network by applying quantization-aware training (Jacob et al., 2018). We adopt the ℓ0-norm for FSA (Zhao et al., 2019) and modification compression for GDA (Liu et al., 2017a) to reduce the number of modified parameters. Among the three types of T-BFA (Rakin et al., 2020b), we compare to the most comparable one: the 1-to-1 stealthy attack scheme. The purpose of this attack scheme is to misclassify samples of a single source class into the target class while maintaining the prediction accuracy of other samples. Besides, we take the fine-tuning (FT) of the last fully-connected layer as a basic attack and present its results. We perform the attack once for each selected image except for TBT (Rakin et al., 2020a), for a total of 1,000 attacks on each dataset. The attack objective of TBT is that the attacked DNN model misclassifies all inputs with a trigger to a certain target class. Due to such an objective, the number of attacks for TBT is equal to the number of target classes (i.e., 10 attacks on CIFAR-10 and 50 attacks on ImageNet).
E.2 TARGET MODELS
According to the settings in (Rakin et al., 2020a;b), we adopt two popular network architectures, ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015), for evaluation. On CIFAR-10, we perform experiments on ResNet-20 and VGG-16. On ImageNet, we use the pre-trained ResNet-18* and VGG-16† networks. We quantize all networks to the 4-bit and 8-bit quantization levels using the layer-wise uniform weight quantization scheme, which is similar to the one involved in the Tensor-RT solution (Migacz, 2017).

*Downloaded from https://download.pytorch.org/models/resnet18-5c106cde.pth
†Downloaded from https://download.pytorch.org/models/vgg16_bn-6c64b313.pth
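For illustration, here is a hedged sketch of layer-wise uniform quantization producing the two's-complement bit tensor consumed by the attack. The max-abs rule for the step size ∆l is an assumption for this sketch; Tensor-RT-style schemes may calibrate ∆l differently (e.g., via entropy calibration).

```python
import numpy as np

def quantize_layer(W, Q):
    # W: float weight array of one layer; Q: bit-width (e.g., 4 or 8).
    # Returns (bits, delta_l) where bits[..., i] is bit v_{i+1} (LSB first),
    # compatible with the decoder h(.) sketched earlier.
    delta_l = np.abs(W).max() / (2 ** (Q - 1) - 1)          # assumed step-size rule
    v = np.clip(np.round(W / delta_l),
                -2 ** (Q - 1), 2 ** (Q - 1) - 1).astype(int)
    bits = ((v[..., None] >> np.arange(Q)) & 1)             # two's-complement bits
    return bits, delta_l
```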
E.3 PARAMETER SETTINGS OF TA-LBF
For each attack, we adopt a strategy for jointly searching λ and k. Specifically, for an initially given k, we search λ from a relatively large initial value and divide it by 2 if the attack does not succeed. The maximum number of search steps for λ under a fixed k is set to 8. If this number is exceeded, we double k and search λ again from the relatively large initial value. The maximum number of search steps for k is set to 4. On CIFAR-10, the initial k and λ are set to 5 and 100, respectively. On ImageNet, λ is initialized as 10^4; k is initialized as 5 and 50 for ResNet and VGG, respectively. On CIFAR-10, the δ in L1 is set to 10. On ImageNet, δ is set to 3 and increased to 10 if the attack fails. u1 and u2 are initialized as b, u3 is initialized as 0, and z1, z2, and z3 are initialized as 0. b̂ is initialized as b. During each iteration, the number of gradient steps for updating b̂ is 5 and the step size is set to 0.01 on both datasets. The hyper-parameters (ρ1, ρ2, ρ3) (see Eq. (11)) are initialized as (10^{-4}, 10^{-4}, 10^{-5}) on both datasets, and increase by ρi ← ρi × 1.01, i = 1, 2, 3, after each iteration. The maximum values of (ρ1, ρ2, ρ3) are set to (50, 50, 5) on both datasets. Besides the maximum number of iterations (i.e., 2000), we also set another stopping criterion, i.e., ||b̂ − u1||_2^2 ≤ 10^{-4} and ||b̂ − u2||_2^2 ≤ 10^{-4}.
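The joint search described above can be sketched as the following loop; `try_attack` is an assumed callable that runs TA-LBF with the given (λ, k) and reports whether the attack succeeded.

```python
def search_lambda_k(try_attack, k_init, lam_init,
                    max_lam_search=8, max_k_search=4):
    # Halve lambda until the attack succeeds; if it never does within the
    # budget, double k and restart lambda from its initial value.
    k = k_init
    for _ in range(max_k_search):
        lam = lam_init
        for _ in range(max_lam_search):
            if try_attack(lam, k):
                return lam, k
            lam /= 2.0
        k *= 2
    return None  # attack failed within the search budget
```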
F MORE RESULTS ON RESISTANCE TO DEFENSE METHODS
F.1 RESISTANCE TO PIECE-WISE CLUSTERING
We conduct experiments using the 8-bit quantized ResNet on CIFAR-10 with different clustering coefficients. We set the maximum number of search steps for k to 5 for the clustering coefficients 0.005 and 0.01 and keep the rest of the settings the same as those in Section 4.1. The results are presented in Table 4. As shown in the table, for all methods, all values of Nflip are larger than those for attacking models without defense, which is similar to Table 2. Our method achieves a 100% ASR with the fewest Nflip under all three clustering coefficients. Although TBT obtains a smaller ∆Nflip than our method, it fails to achieve a satisfactory ASR. For example, TBT achieves only a 10.1% ASR when the clustering coefficient is set to 0.01. We observe that for all clustering coefficients, piece-wise clustering reduces the original accuracy. This phenomenon is more significant as the clustering coefficient increases. The results also show that there is no guarantee that a larger clustering coefficient (e.g., 0.01) yields a more robust model, which is consistent with the finding in (He et al., 2020).
F.2 RESISTANCE TO LARGER MODEL CAPACITY
Besides the results for networks with a 2× width shown in Section 4.3, we also evaluate all methods against models with a 3× and 4× width. All settings are the same as those used in Section 4.1. The results are provided in Table 5. Among all attack methods, our method is the least affected by increasing the network width. Especially for the network with a 4× width, our ∆Nflip is only 2.80. The results demonstrate the superiority of the formulated BIP problem and its optimization. Moreover, compared with piece-wise clustering, a larger model capacity can improve the original accuracy, but it increases the model size and the computational complexity.
G DISCUSSIONS
G.1 COMPARING BACKDOOR, ADVERSARIAL, AND WEIGHT ATTACK
An attacker can achieve malicious purposes utilizing backdoor, adversarial, and weight attacks. In this section, we emphasize the differences among them.
Backdoor attack happens in the training stage and requires that the attacker can tamper with the training data or even the training process (Liu et al., 2020b; Li et al., 2020). Through poisoning some training samples with a trigger, the attacker can control the behavior of the attacked DNN in the inference stage. For example, images with reflections are misclassified into a target class, while benign images are classified normally (Liu et al., 2020a). However, such an attack paradigm causes accuracy degradation on benign samples, which makes it detectable by users. Besides, these methods also require modifying samples in the inference stage, which is sometimes impossible for the attacker. Many defense methods against backdoor attack have been proposed, such as the preprocessing-based defense (Liu et al., 2017b), the model reconstruction-based defense (Liu et al., 2018a), and the trigger synthesis-based defense (Wang et al., 2019).
Adversarial attack modifies samples in the inference stage by adding small perturbations that remain imperceptible to the human vision system (Akhtar & Mian, 2018). Since adversarial attack only modifies inputs while keeping the model unchanged, it has no effect on benign samples. Besides the basic white-box attack, the black-box attack (Wu et al., 2020b; Chen et al., 2020) and the universal attack (Zhang et al., 2020b;a) have attracted wide attention. Inspired by its success in classification, it has also been extended to other tasks, including image captioning (Xu et al., 2019), retrieval (Bai et al., 2020; Feng et al., 2020), etc. Similarly, recent studies have demonstrated many defense methods against adversarial attack, including the preprocessing-based defense (Xie et al., 2018), the detection-based defense (Xu et al., 2017), and the adversarial learning-based defense (Carmon et al., 2019; Wu et al., 2020c).
Weight attack modifies model parameters in the deployment stage, which is the paradigm studied in this work. Weight attack generally aims at misleading the DNN model on the selected sample(s) while having a minor effect on other samples (Zhao et al., 2019; Rakin et al., 2020b). Many studies (Yao et al., 2020; Breier et al., 2018; Pan, 2020) have demonstrated that the DNN parameters can be modified at the bit-level in memory using fault injection techniques (Agoyan et al., 2010; Kim et al., 2014; Selmke et al., 2015) in practice. Note that defense methods against weight attack have not been well studied. Although some defense methods (He et al., 2020) have been proposed, they cannot achieve satisfactory performance. For example, our method can still achieve a 100% attack success rate against the two evaluated defense methods. Our work would encourage further investigation into the security of model parameters from both the attack and defense sides.
G.2 COMPARING TA-LBF WITH OTHER WEIGHT ATTACKS
We compare our TA-LBF with other weight attack methods, including TBT (Rakin et al., 2020a), T-BFA (Rakin et al., 2020b), GDA (Liu et al., 2017a), and FSA (Zhao et al., 2019), in this section. TBT tampers with both the test sample and the model parameters. Specifically, it first locates critical bits and generates a trigger, and then flips these bits to classify all inputs embedded with the trigger into a target class. However, the malicious samples are easily detected by human inspection or by many detection methods (Tran et al., 2018; Du et al., 2020). We do not modify the samples to perform TA-LBF, which makes the attack more stealthy. Rakin et al. (2020b) proposed T-BFA, which misclassifies all samples (N-to-1 version) or samples from a source class (1-to-1 version) into a target class. Our method aims at misclassifying a specific sample, which meets the attacker's requirement in some scenarios. For example, the attacker may want to manipulate the behavior of a face recognition engine on a specific input. Since it affects multiple samples, T-BFA may not be stealthy enough for attacking real-world applications. GDA (Liu et al., 2017a) and FSA (Zhao et al., 2019) modify model parameters at the weight-level rather than the bit-level. They are designed for misclassifying multiple samples from arbitrary classes, which makes it infeasible for them to modify only the parameters connected to the source and target classes. As shown in the experiments, they modify more parameters than our method, which might be due to this reason. Besides, TBT, T-BFA, and GDA determine the critical weights to modify using heuristic strategies, while our TA-LBF adopts an optimization-based method. Although FSA applies ADMM for solving its optimization problem, it has no explicit constraint to control the number of modified parameters, which makes it tend to modify more parameters than GDA and our TA-LBF.
H TRADE-OFF BETWEEN THREE EVALUATION METRICS
In this section, we investigate the trade-off between three adopted evaluation metrics (i.e., PA-ACC, ASR, and Nflip) for our attack. All experiments are conducted on CIFAR-10 and ImageNet dataset in attacking the 8-bit quantized ResNet.
We first discuss the trade-off between PA-ACC and Nflip by fixing the ASR at 100% using the search strategy in Appendix E.3 and adjusting the initial λ and k to obtain different attack results. The two curves on the left show that increasing Nflip can improve the PA-ACC when Nflip is relatively small; the PA-ACC decreases with the increase of Nflip when Nflip is greater than a threshold. This phenomenon demonstrates that constraining the number of bit-flips is essential to ensure the attack stealthiness, as mentioned in Section 3.2. To study the trade-off between PA-ACC
and ASR, we fix the parameter k as 10 for approximately 10 bit-flips and adjust the parameter λ to obtain different PA-ACC and ASR results. The trade-off curves between PA-ACC and ASR show that increasing ASR can decrease the PA-ACC significantly. Therefore, how to achieve high ASR and PA-ACC simultaneously is still an important open problem. | 1. What is the focus of the paper regarding neural network classifiers?
2. What kind of attack does the paper propose, and how does it differ from other attacks in the literature?
3. How effective and stealthy is the proposed attack, according to the reported experiments?
4. How does the attack perform on different levels of quantization, hyperparameter values, and more robust models?
5. Is there any limitation or concern regarding the applicability or generalizability of the attack? | Review | Review
The paper describes a bit-flipping white-box attack on deployed neural network classifiers: given a model with quantized parameters, find a perturbation of the parameter bits such that the model will misclassify one specific example, while maintaining high accuracy on other examples.
The attack is formulated as a binary programming problem where the parameter bits are the optimization variables and the objective function is an additive tradeoff between an effectiveness term (misclassification loss on the selected example) and a stealthiness loss (classification loss on a batch of training examples); a constraint on the number of bit flips is also included. The optimization problem is solved by continuous relaxation using the Lp-box ADMM solver.
The paper reports experiments on various standard classifiers trained on CIFAR-10 or ImageNet, with different levels of quantization. The proposed attack is compared to other weight attacks in the literature, and it achieves comparable or better attack success rate (a measure of effectiveness) and post-attack accuracy (a measure of stealthiness). There are also experiments on different values of hyperparameters and on more robust models (obtained either by a defense technique or by making the model bigger).
Overall I find this a valid contribution. |
ICLR | Title
Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits
Abstract
To explore the vulnerability of deep neural networks (DNNs), many attack paradigms have been well studied, such as the poisoning-based backdoor attack in the training stage and the adversarial attack in the inference stage. In this paper, we study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes. Specifically, our goal is to misclassify a specific sample into a target class without any sample modification, while not significantly reduce the prediction accuracy of other samples to ensure the stealthiness. To this end, we formulate this problem as a binary integer programming (BIP), since the parameters are stored as binary bits (i.e., 0 and 1) in the memory. By utilizing the latest technique in integer programming, we equivalently reformulate this BIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM) method. Consequently, the flipped critical bits can be easily determined through optimization, rather than using a heuristic strategy. Extensive experiments demonstrate the superiority of our method in attacking DNNs. The code is available at: https://github.com/jiawangbai/TA-LBF.
1 INTRODUCTION
Due to the great success of deep neural networks (DNNs), its vulnerability (Szegedy et al., 2014; Gu et al., 2019) has attracted great attention, especially for security-critical applications (e.g., face recognition (Dong et al., 2019) and autonomous driving (Eykholt et al., 2018)). For example, backdoor attack (Saha et al., 2020; Xie et al., 2019) manipulates the behavior of the DNN model by mainly poisoning some training data in the training stage; adversarial attack (Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2017) aims to fool the DNN model by adding malicious perturbations onto the input in the inference stage.
Compared to the backdoor attack and adversarial attack, a novel attack paradigm, dubbed weight attack (Breier et al., 2018), has been rarely studied. It assumes that the attacker has full access to the memory of a device, such that he/she can directly change the parameters of a deployed model to achieve some malicious purposes (e.g., crushing a fully functional DNN and converting it to a random output generator (Rakin et al., 2019)). Since weight attack neither modifies the input nor control the training process, both the service provider and the user are difficult to realize the existence of the attack. In practice, since the deployed DNN model is stored as binary bits in the memory, the attacker can modify the model parameters using some physical fault injection techniques, such as Row Hammer Attack (Agoyan et al., 2010; Selmke et al., 2015) and Laser Beam Attack (Kim et al., 2014). These techniques can precisely flip any bit of the data in the memory. Some previous works (Rakin et al., 2019; 2020a;b) have demonstrated that it is feasible to change the model weights via bit flipping to achieve some malicious purposes. However, the critical bits are identified mostly
†This work was done when Jiawang Bai was an intern at Tencent AI Lab. Correspondence to: Baoyuan Wu (wubaoyuan@cuhk.edu.cn) and Shu-Tao Xia (xiast@sz.tsinghua.edu.cn).
using some heuristic strategies in their methods. For example, Rakin et al. (2019) combined gradient ranking and progressive search to identify the critical bits for flipping.
This work also focuses on the bit-level weight attack against DNNs in the deployment stage, whereas with two different goals, including effectiveness and stealthiness. The effectiveness requires that the attacked model can misclassify a specific sample to a attacker-specified target class without any sample modification, while the stealthiness encourages that the prediction accuracy of other samples will not be significantly reduced. As shown in Fig. 1, to achieve these goals, we propose to identify and flip bits that are critical to the prediction of the specific sample but not significantly impact the prediction of other samples. Specifically, we treat each bit in the memory as a binary variable, and our task is to determine its state (i.e., 0 or 1). Accordingly, it can be formulated as a binary integer programming (BIP) problem. To further improve the stealthiness, we also limit the number of flipped bits, which can be formulated as a cardinality constraint. However, how to solve the BIP problem with a cardinality constraint is a challenging problem. Fortunately, inspired by an advanced optimization method, the `p-box ADMM (Wu & Ghanem, 2018), this problem can be reformulated as a continuous optimization problem, which can further be efficiently and effectively solved by the alternating direction method of multipliers (ADMM) (Glowinski & Marroco, 1975; Gabay & Mercier, 1976). Consequently, the flipped bits can be determined through optimization rather than the original heuristic strategy, which makes our attack more effective. Note that we also conduct attack against the quantized DNN models, following the setting in some related works (Rakin et al., 2019; 2020a). Extensive experiments demonstrate the superiority of the proposed method over several existing weight attacks. For example, our method achieves a 100% attack success rate with 7.37 bit-flips and 0.09% accuracy degradation of the rest unspecific inputs in attacking a 8-bit quantized ResNet-18 model on ImageNet. Moreover, we also demonstrate that the proposed method is also more resistant to existing defense methods.
The main contributions of this work are three-fold. 1) We explore a novel attack scenario where the attacker enforces a specific sample to be predicted as a target class by modifying the weights of a deployed model via bit flipping without any sample modification. 2) We formulate the attack as a BIP problem with the cardinality constraint and propose an effective and efficient method to solve this problem. 3) Extensive experiments verify the superiority of the proposed method against DNNs with or without defenses.
2 RELATED WORKS
Neural Network Weight Attack. How to perturb the weights of a trained DNN for malicious purposes received extensive attention (Liu et al., 2017a; 2018b; Hong et al., 2019). Liu et al. (2017a) firstly proposed two schemes to modify model parameters for misclassification without and with considering stealthiness, which is dubbed single bias attack (SBA) and gradient descent
attack (GDA) respectively. After that, Trojan attack (Liu et al., 2018b) was proposed, which injects malicious behavior to the DNN by generating a general trojan trigger and then retraining the model. This method requires to change lots of parameters. Recently, fault sneaking attack (FSA) (Zhao et al., 2019) was proposed, which aims to misclassify certain samples into a target class by modifying the DNN parameters with two constraints, including maintaining the classification accuracy of other samples and minimizing parameter modifications. Note that all those methods are designed to misclassify multiple samples instead of a specific sample, which may probably modify lots of parameters or degrade the accuracy of other samples sharply.
Bit-Flip based Attack. Recently, some physical fault injection techniques (Agoyan et al., 2010; Kim et al., 2014; Selmke et al., 2015) were proposed, which can be adopted to precisely flip any bit in the memory. Those techniques promote researchers to study how to modify model parameters at the bit-level. As a branch of weight attack, the bit-flip based attack was firstly explored in (Rakin et al., 2019). It proposed an untargeted attack that can convert the attacked DNN to a random output generator with several bit-flips. Besides, Rakin et al. (2020a) proposed the targeted bit Trojan (TBT) to inject the fault into DNNs by flipping some critical bits. Specifically, the attacker flips the identified bits to force the network to classify all samples embedded with a trigger to a certain target class, while the network operates with normal inference accuracy with benign samples. Most recently, Rakin et al. (2020b) proposed the targeted bit-flip attack (T-BFA), which achieves malicious purposes without modifying samples. Specifically, T-BFA can mislead samples from single source class or all classes to a target class by flipping the identified weight bits. It is worth noting that the above bit-flip based attacks leverage heuristic strategies to identify critical weight bits. How to find critical bits for the bit-flip based attack method is still an important open question.
3 TARGETED ATTACK WITH LIMITED BIT-FLIPS (TA-LBF)
3.1 PRELIMINARIES
Storage and Calculation of Quantized DNNs. Currently, it is a widely-used technique to quantize DNNs before deploying on devices for efficiency and reducing storage size. For each weight in l-th layer of a Q-bit quantized DNN, it will be represented and then stored as the signed integer in two’s complement representation (v = [vQ; vQ−1; ...; v1] ∈ {0, 1}Q) in the memory. Attacker can modify the weights of DNNs through flipping the stored binary bits. In this work, we adopt the layer-wise uniform weight quantization scheme similar to Tensor-RT (Migacz, 2017). Accordingly, each binary vector v can be converted to a real number by a function h(·), as follow:
h(v) = (−2Q−1 · vQ + Q−1∑ i=1 2i−1 · vi) ·∆l, (1)
where l indicates which layer the weight is from, ∆l > 0 is a known and stored constant which represents the step size of the l-th layer weight quantizer.
Notations. We denote a Q-bit quantized DNN-based classification model as f : X → Y , where X ∈ Rd being the input space and Y ∈ {1, 2, ...,K} being the K-class output space. Assuming that the last layer of this DNN model is a fully-connected layer with B ∈ {0, 1}K×C×Q being the quantized weights, where C is the dimension of last layer’s input. Let Bi,j ∈ {0, 1}Q be the two’s complement representation of a single weight and Bi ∈ {0, 1}C×Q denotes all the binary weights connected to the i-th output neuron. Given a test sample x with the ground-truth label s, f(x; Θ,B) ∈ [0, 1]K is the output probability vector and g(x; Θ) ∈ RC is the input of the last layer, where Θ denotes the model parameters without the last layer.
Attack Scenario. In this paper, we focus on the white-box bit-flip based attack, which was first introduced in (Rakin et al., 2019). Specifically, we assume that the attacker has full knowledge of the model (including it’s architecture, parameters, and parameters’ location in the memory), and can precisely flip any bit in the memory. Besides, we also assume that attackers can have access to a small portion of benign samples, but they can not tamper the training process and the training data.
Attacker’s Goals. Attackers have two main goals, including the effectiveness and the stealthiness. Specifically, effectiveness requires that the attacked model can misclassify a specific sample to a predefined target class without any sample modification, and the stealthiness requires that the prediction accuracy of other samples will not be significantly reduced.
3.2 THE PROPOSED METHOD
Loss for Ensuring Effectiveness. Recall that our first target is to force a specific image to be classified as the target class by modifying the model parameters at the bit-level. To this end, the most straightforward way is maximizing the logit of the target class while minimizing that of the source class. For a sample x, the logit of a class can be directly determined by the input of the last layer g(x; Θ) and weights connected to the node of that class. Accordingly, we can modify weights only connected to the source and target class to fulfill our purpose, as follows:
L1(x; Θ,B, B̂s, B̂t) = max ( m− p(x; Θ, B̂t) + δ, 0 ) + max ( p(x; Θ, B̂s)−m+ δ, 0 ) , (2)
where p(x; Θ, B̂i) = [h(B̂i,1);h(B̂i,2); ...;h(B̂i,C)]>g(x; Θ) denotes the logit of class i (i = s or i = t), h(·) is the function defined in Eq. (1), m = max
i∈{0,...,K}\{s} p(x; Θ,Bi), and δ ∈ R
indicates a slack variable, which will be specified in later experiments. The first term of L1 aims at increasing the logit of the target class, while the second term is to decrease the logit of the source class. The loss L1 is 0 only when the output on target class is more than m + δ and the output on source class is less than m − δ. That is, the prediction on x of the target model is the predefined target class. Note that B̂s, B̂t ∈ {0, 1}C×Q are two variables we want to optimize, corresponding to the weights of the fully-connected layer w.r.t. class s and t, respectively, in the target DNN model. B ∈ {0, 1}K×C×Q denotes the weights of the fully-connected layer of the original DNN model, and it is a constant tensor in L1. For clarity, hereafter we simplify L1(x; Θ,B, B̂s, B̂t) as L1(B̂s, B̂t), since x and Θ are also provided input and weights.
Loss for Ensuring Stealthiness. As we mentioned in Section 3.1, we assume that the attacker can get access to an auxiliary sample set {(xi, yi)}Ni=1. Accordingly, the stealthiness of the attack can be formulated as follows:
L2(B̂s, B̂t) = N∑ i=1 `(f(xi; Θ,B{1,...,K}\{s,t}, B̂s, B̂t), yi), (3)
where B{1,...,K}\{s,t} denotes {B1,B2, ...,BK}\{Bs,Bt}, and fj(xi; Θ,B{1,...,K}\{s,t}, B̂s, B̂t) indicates the posterior probability of xi w.r.t. class j, caclulated by Softmax(p(xi; Θ, B̂j)) or Softmax(p(xi; Θ,Bj)). `(·, ·) is specified by the cross entropy loss. To keep clarity, xi, Θ and B{1,...,K}\{s,t} are omitted in L2(B̂s, B̂t) . Besides, to better meet our goal, a straightforward additional approach is reducing the magnitude of the modification. In this paper, we constrain the number of bit-flips less than k. Physical bit flipping techniques can be time-consuming as discussed in (Van Der Veen et al., 2016; Zhao et al., 2019). Moreover, such techniques lead to abnormal behaviors in the attacked system (e.g., suspicious cache activity of processes), which may be detected by some physical detection-based defenses (Gruss et al., 2018). As such, minimizing the number of bit-flips is critical to make the attack more efficient and practical.
Overall Objective. In conclusion, the final objective function is as follows:
min B̂s,B̂t
L1(B̂s, B̂t) + λL2(B̂s, B̂t),
s.t. B̂s ∈ {0, 1}C×Q, B̂t ∈ {0, 1}C×Q, dH(Bs, B̂s) + dH(Bt, B̂t) ≤ k, (4)
where dH(·, ·) denotes the Hamming distance and λ > 0 is a trade-off parameter. For the sake of brevity, Bs and Bt are concatenated and further reshaped to the vector b ∈ {0, 1}2CQ. Similarly, B̂s and B̂t are concatenated and further reshaped to the vector b̂ ∈ {0, 1}2CQ. Besides, for binary vector b and b̂, there exists a nice relationship between Hamming distance and Euclidean distance: dH(b, b̂) = ||b− b̂||22. The new formulation of the objective is as follows:
min b̂
L1(b̂) + λL2(b̂), s.t. b̂ ∈ {0, 1}2CQ, ||b− b̂||22 − k ≤ 0. (5)
Problem (5) is denoted as TA-LBF (targeted attack with limited bit-flips). Note that TA-LBF is a binary integer programming (BIP) problem, whose optimization is challenging. We will introduce an effective and efficient method to solve it in the following section.
3.3 AN EFFECTIVE OPTIMIZATION METHOD FOR TA-LBF
To solve the challenging BIP problem (5), we adopt the generic solver for integer programming, dubbed `p-Box ADMM (Wu & Ghanem, 2018). The solver presents its superior performance in many tasks, e.g., model pruning (Li et al., 2019), clustering (Bibi et al., 2019), MAP inference (Wu et al., 2020a), adversarial attack (Fan et al., 2020), etc.. It proposed to replace the binary constraint equivalently by the intersection of two continuous constraints, as follows
b̂ ∈ {0, 1}2CQ ⇔ b̂ ∈ (Sb ∩ Sp), (6)
where Sb = [0, 1]2CQ indicates the box constraint, and Sp = {b̂ : ||b̂ − 12 || 2 2 = 2CQ 4 } denotes the `2-sphere constraint. Utilizing (6), Problem (5) is equivalently reformulated as
min b̂,u1∈Sb,u2∈Sp,u3∈R+
L1(b̂) + λL2(b̂), s.t. b̂ = u1, b̂ = u2, ||b− b̂||22 − k + u3 = 0, (7)
where two extra variables u1 and u2 are introduced to split the constraintsw.r.t. b̂. Besides, the nonnegative slack variable u3 ∈ R+ is used to transform ||b−b̂||22−k ≤ 0 in (5) into ||b−b̂||22−k+u3 = 0. The above constrained optimization problem can be efficiently solved by the alternating direction method of multipliers (ADMM) (Boyd et al., 2011).
Following the standard procedure of ADMM, we firstly present the augmented Lagrangian function of the above problem, as follows:
L(b̂,u1,u2, u3, z1, z2, z3) =L1(b̂) + λL2(b̂) + z>1 (b̂− u1) + z>2 (b̂− u2) +z3(||b− b̂||22 − k + u3) + c1(u1) + c2(u2) + c3(u3)
+ ρ1 2 ||b̂− u1||22 + ρ2 2 ||b̂− u2||22 + ρ3 2
(||b− b̂||22 − k + u3)2, (8)
where z1, z2 ∈ R2CQ and z3 ∈ R are dual variables, and ρ1, ρ2, ρ3 > 0 are penalty factors, which will be specified later. c1(u1) = I{u1∈Sb}, c2(u2) = I{u2∈Sp}, and c3(u3) = I{u3∈R+} capture the constraints Sb,Sp and R+, respectively. The indicator function I{a} = 0 if a is true; otherwise, I{a} = +∞. Based on the augmented Lagrangian function, the primary and dual variables are updated iteratively, with r indicating the iteration index.
Given (b̂r, zr1 , zr2 , zr3), update (u r+1 1 ,u r+1 2 , u r+1 3 ). Given (b̂r, zr1 , zr2 , zr3), (u1,u2, u3) are independent, and they can be optimized in parallel, as follows ur+11 = arg min u1∈Sb (zr1) >(b̂r − u1) + ρ12 ||b̂ r − u1||22 = PSb(b̂r + zr1 ρ1 ), ur+12 = arg min u2∈Sp (zr2) >(b̂r − u2) + ρ22 ||b̂ r − u2||22 = PSp(b̂r + zr2 ρ2 ), ur+13 = arg min u3∈R+ zr3(||b− b̂r||22 − k + u3) + ρ3 2 (||b− b̂ r||22 − k + u3)2
= PR+(−||b− b̂r||22 + k − zr3 ρ3 ),
(9)
where PSb(a) = min((1,max(0,a)) with a ∈ Rn is the projection onto the box constraint Sb; PSp(a) = √ n 2 ā ||a|| + 1 2 with ā = a − 1 2 indicates the projection onto the `2-sphere constraint Sp (Wu & Ghanem, 2018); PR+(a)=max(0, a) with a∈R indicates the projection onto R+.
Given (ur+11 ,u r+1 2 , u r+1 3 , z r 1 , z r 2 , z r 3), update b̂r+1. Although there is no closed-form solution to b̂r+1, it can be easily updated by the gradient descent method, as both L1(b̂) and L2(b̂) are differentiable w.r.t. b̂, as follows
b̂r+1 ← b̂r − η · ∂L(b̂,u r+1 1 ,u r+1 2 , u r+1 3 , z r 1 , z r 2 , z r 3)
∂b̂
∣∣∣ b̂=b̂r , (10)
where η > 0 denotes the step size. Note that we can run multiple steps of gradient descent in the above update. Both the number of steps and η will be specified in later experiments. Besides, due to the space limit, the detailed derivation of ∂L/∂b̂ will be presented in Appendix A.
Given (b̂r+1,ur+11 ,u r+1 2 , u r+1 3 ), update (z r+1 1 , z r+1 2 , z r+1 3 ). The dual variables are updated by the gradient ascent method, as follows zr+11 = z r 1 + ρ1(b̂ r+1 − ur+11 ), zr+12 = z r 2 + ρ2(b̂
r+1 − ur+12 ), zr+13 = z r 3 + ρ3(||b− b̂r+1||22 − k + ur+13 ).
(11)
Remarks. 1) Note that since (ur+11 ,u r+1 2 , u r+1 3 ) are updated in parallel, their updates belong to the same block. Thus, the above algorithm is a two-block ADMM algorithm. We provide the algorithm outline in Appendix B. 2) Except for the update of b̂r+1, all other updates are very simple and efficient. The computational cost of the whole algorithm will be analyzed in Appendix C. 3) Due to the inexact solution to b̂r+1 using gradient descent, the theoretical convergence of the whole ADMM algorithm cannot be guaranteed. However, as demonstrated in many previous works (Gol’shtein & Tret’yakov, 1979; Eckstein & Bertsekas, 1992; Boyd et al., 2011), the inexact two-block ADMM often shows good practical convergence, which is also the case in our later experiments. Besides, the numerical convergence analysis is presented in Appendix D. 4) The proper adjustment of (ρ1, ρ2, ρ3) could accelerate the practical convergence, which will be specified later .
4 EXPERIMENTS
4.1 EVALUATION SETUP
Settings. We compare our method (TA-LBF) with GDA (Liu et al., 2017a), FSA (Zhao et al., 2019), T-BFA (Rakin et al., 2020b), and TBT (Rakin et al., 2020a). All those methods can be adopted to misclassify a specific image into a target class. We also take the fine-tuning (FT) of the last fully-connected layer as a baseline method. We conduct experiments on CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015). We randomly select 1,000 images from each dataset as the evaluation set for all methods. Specifically, for each of the 10 classes in CIFAR-10, we perform attacks on the 100 randomly selected validation images from the other 9 classes. For ImageNet, we randomly choose 50 target classes. For each target class, we perform attacks on 20 images randomly selected from the rest classes in the validation set. Besides, for all methods except GDA which does not employ auxiliary samples, we provide 128 and 512 auxiliary samples on CIFAR-10 and ImageNet, respectively. Following the setting in (Rakin et al., 2020a;b), we adopt the quantized ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015) as the target models. For our TA-LBF, the trade-off parameter λ and the constraint parameter k affect the attack stealthiness and the attack success rate. We adopt a strategy for jointly searching λ and k, which is specified in Appendix E.3. More descriptions of our settings are provided in Appendix E.
Evaluation Metrics. We adopt three metrics to evaluate the attack performance, i.e., the post attack accuracy (PA-ACC), the attack success rate (ASR), and the number of bit-flips (Nflip). PA-ACC denotes the post attack accuracy on the validation set except for the specific attacked sample and the auxiliary samples. ASR is defined as the ratio of attacked samples that are successfully attacked into the target class among all 1,000 attacked samples. Nflip is the number of bit-flips required for an attack. A better attack performance corresponds to a higher PA-ACC and ASR, while a lower Nflip. Besides, we also show the accuracy of the original model, denoted as ACC.
4.2 MAIN RESULTS
Results on CIFAR-10. The results of all methods on CIFAR-10 are shown in Table 1. Our method achieves a 100% ASR with the fewest Nflip for all the bit-widths and architectures. FT modifies the maximum number of bits among all methods since there is no limitation of parameter modifications. Due to the absence of the training data, the PA-ACC of FT is also poor. These results indicate that fine-tuning the trained DNN as an attack method is infeasible. Although T-BFA flips the secondfewest bits under three cases, it fails to achieve a higher ASR than GDA and FSA. In terms of PA-ACC, TA-LBF is comparable to other methods. Note that the PA-ACC of TA-LBF significantly outperforms that of GDA, which is the most competitive w.r.t. ASR and Nflip among all the baseline methods. The PA-ACC of GDA is relatively poor, because it does not employ auxiliary samples. Achieving the highest ASR, the lowest Nflip, and the comparable PA-ACC demonstrates that our optimization-based method is more superior than other heuristic methods (TBT, T-BFA and GDA).
Results on ImageNet. The results on ImageNet are shown in Table 1. It can be observed that GDA shows very competitive performance compared to other methods. However, our method obtains the highest PA-ACC, the fewest bit-flips (less than 8), and a 100% ASR in attacking ResNet. For VGG, our method also achieves a 100% ASR with the fewest Nflip for both bit-widths. The Nflip results of our method are mainly attributed to the cardinality constraint on the number of bit-flips. Moreover, for our method, the average PA-ACC degradation over four cases on ImageNet is only 0.06%, which demonstrates the stealthiness of our attack. When comparing the results of ResNet and VGG, an interesting observation is that all methods require significantly more bit-flips for VGG. One reason is that VGG is much wider than ResNet. Similar to the claim in (He et al., 2020), increasing the network width contributes to the robustness against the bit-flip based attack.
4.3 RESISTANCE TO DEFENSE METHODS
Resistance to Piece-wise Clustering. He et al. (2020) proposed a novel training technique, called piece-wise clustering, to enhance the network robustness against the bit-flip based attack. Such a training technique introduces an additional weight penalty to the inference loss, which has the effect of eliminating close-to-zero weights (He et al., 2020). We test the resistance of all attack methods to the piece-wise clustering. We conduct experiments with the 8-bit quantized ResNet on CIFAR-10 and ImageNet. Following the ideal configuration in (He et al., 2020), the clustering coefficient, which is a hyper-parameter of piece-wise clustering, is set to 0.001 in our evaluation. For our method, the initial k is set to 50 on ImageNet and the rest settings are the same as those in Section 4.1. Besides the three metrics in Section 4.1, we also present the number of increased Nflip compared to the model without defense (i.e., results in Table 1), denoted as ∆Nflip.
The results of the resistance to the piece-wise clustering of all attack methods are shown in Table 2. It shows that the model trained with piece-wise clustering can improve the number of required bit-flips for all attack methods. However, our method still achieves a 100% ASR with the least number of bit-flips on both two datasets. Although TBT achieves a smaller ∆Nflip than ours on CIFAR-10, its ASR is only 52.3%, which also verifies the defense effectiveness of the piece-wise clustering. Compared with other methods, TA-LBF achieves the fewest ∆Nflip on ImageNet and the best PA-ACC on both datasets. These results demonstrate the superiority of our method over other methods when attacking models trained with piece-wise clustering.
Resistance to Larger Model Capacity. Previous studies (He et al., 2020; Rakin et al., 2020b) observed that increasing the network capacity can improve the robustness against the bit-flip based attack. Accordingly, we evaluate all attack methods against the models with a larger capacity using the 8-bit quantized ResNet on both datasets. Similar to the strategy in (He et al., 2020), we increase the model capacity by varying the network width (i.e., 2× width in our experiments). All settings of our method are the same as those used in Section 4.1.
The results are presented in Table 2. We observe that all methods require more bit-flips to attack the model with the 2× width. To some extent, it demonstrates that the wider network with the same architecture is more robust against the bit-flip based attack. However, our method still achieves a 100% ASR with the fewest Nflip and ∆Nflip. Moreover, when comparing the two defense methods, we find that piece-wise clustering performs better than the model with a larger capacity in terms of ∆Nflip. However, piece-wise clustering training also causes the accuracy decrease of the original model (e.g., from 92.16% to 91.01% on CIFAR-10). We provide more results in attacking models with defense under different settings in Appendix F.
4.4 ABLATION STUDY
We perform ablation studies on the parameters λ and k, and on the number of auxiliary samples N. We use the 8-bit quantized ResNet on CIFAR-10 as the representative case for analysis. We report the attack performance of TA-LBF under different values of λ with k fixed at 20, and under different values of k with λ fixed at 10. To analyze the effect of N, we vary N from 25 to 800 and keep the other settings the same as those in Section 4.1. The results are presented in Fig. 2. We observe that our method achieves a 100% ASR when λ is less than 20. As expected, the PA-ACC increases while the ASR decreases as λ increases. The plot for parameter k shows that k can exactly limit the number of bit-flips, whereas other attack methods involve no such constraint. This advantage is critical since it allows the attacker to identify a limited set of bits to perform an attack when the budget is fixed. As shown in the figure, increasing the number of auxiliary samples up to about 200 has a marked positive impact on the PA-ACC. It is intuitive that more auxiliary samples lead to a better PA-ACC. The observation also indicates that TA-LBF still works well without too many auxiliary samples.
4.5 VISUALIZATION OF DECISION BOUNDARY
To further compare FSA and GDA with our method, we visualize the decision boundaries of the original and post-attack models in Fig. 3. We adopt a four-layer Multi-Layer Perceptron trained on a simulated 2-D Blob dataset with 4 classes. The original decision boundary indicates that the original model classifies all data points almost perfectly. The attacked sample is classified into Class 3 by all methods. Visually, GDA modifies the decision boundary drastically, especially for Class 0, whereas our method modifies the decision boundary mainly around the attacked sample. Although FSA is visually comparable to ours in Fig. 3, it flips 10× more bits than GDA and TA-LBF. In terms of the numerical results, TA-LBF achieves the best PA-ACC and the fewest Nflip. This finding verifies that our method can achieve a successful attack while only slightly tweaking the original classifier.
5 CONCLUSION
In this work, we have presented a novel attack paradigm in which the weights of a deployed DNN are slightly changed via bit flipping in the memory, so as to force a target prediction for a specific sample while leaving the predictions on other samples largely unaffected. Since the weights are stored as binary bits in the memory, we formulate this attack as a binary integer programming (BIP) problem, which can be effectively and efficiently solved by a continuous algorithm. Since the critical bits are determined through optimization, the proposed method achieves the attack goals by flipping only a few bits, and it shows very good performance under different experimental settings.
ACKNOWLEDGMENTS
This work is supported in part by the National Key Research and Development Program of China under Grant 2018YFB1800204, the National Natural Science Foundation of China under Grant 61771273, the R&D Program of Shenzhen under Grant JCYJ20180508152204044. Baoyuan Wu is supported by the Natural Science Foundation of China under grant No. 62076213, and the university development fund of the Chinese University of Hong Kong, Shenzhen under grant No. 01001810.
B ALGORITHM OUTLINE
Algorithm 1 Continuous optimization for the BIP problem (5).
Input: the original quantized DNN model f with weights Θ, B; the attacked sample x with ground-truth label s; the target class t; the auxiliary sample set {(x_i, y_i)}_{i=1}^N; hyper-parameters λ, k, and δ.
Output: b̂.
1: Initialize u_1^0, u_2^0, u_3^0, z_1^0, z_2^0, z_3^0, b̂^0 and let r ← 0;
2: while not converged do
3:   Update u_1^{r+1}, u_2^{r+1}, and u_3^{r+1} as in Eq. (9);
4:   Update b̂^{r+1} as in Eq. (10);
5:   Update z_1^{r+1}, z_2^{r+1}, and z_3^{r+1} as in Eq. (11);
6:   r ← r + 1;
7: end while
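For concreteness, here is a minimal Python sketch of this loop. The callbacks update_u, update_b_hat, and update_z are placeholders standing in for Eqs. (9)-(11), which are assumed to be implemented elsewhere; the initialization, penalty schedule, and stopping rule follow the settings reported in Appendix E.3.

```python
import numpy as np

def ta_lbf_admm(b, update_u, update_b_hat, update_z,
                max_iter=2000, tol=1e-4):
    """Two-block ADMM skeleton for the BIP problem (5).

    b: flattened original bits of B_s and B_t, shape (2*C*Q,).
    update_u / update_b_hat / update_z: callbacks implementing
    Eqs. (9), (10), and (11), respectively (assumed given).
    """
    b_hat = b.astype(float).copy()
    u1, u2, u3 = b_hat.copy(), b_hat.copy(), 0.0      # init as in Appendix E.3
    z1, z2, z3 = np.zeros_like(b_hat), np.zeros_like(b_hat), 0.0
    rho = np.array([1e-4, 1e-4, 1e-5])                # penalty factors
    rho_max = np.array([50.0, 50.0, 5.0])
    for _ in range(max_iter):
        u1, u2, u3 = update_u(b_hat, z1, z2, z3, rho)              # Eq. (9)
        b_hat = update_b_hat(u1, u2, u3, z1, z2, z3, rho)          # Eq. (10)
        z1, z2, z3 = update_z(b_hat, u1, u2, u3, z1, z2, z3, rho)  # Eq. (11)
        rho = np.minimum(rho * 1.01, rho_max)         # penalty schedule
        # stop once the box and l2-sphere constraints are nearly satisfied
        if np.sum((b_hat - u1) ** 2) <= tol and np.sum((b_hat - u2) ** 2) <= tol:
            break
    return np.rint(np.clip(b_hat, 0.0, 1.0))          # final binary bits
```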
C COMPLEXITY ANALYSIS
The computational complexity of the proposed algorithm (i.e., Algorithm 1) consists of two parts, the forward and the backward pass. In terms of the forward pass, since Θ and B_{{1,...,K}\{s,t}} are fixed during the optimization, their involved terms, including g(x; Θ) and p(x; Θ, B_i) for i ≠ s, t, are calculated only once. The main cost, from B̂_s and B̂_t, is O(2(N+1)C²Q) per iteration, as there are N+1 samples. In terms of the backward pass, the main cost is from the update of b̂^{r+1}, which is O(2(N+1)CQ) per gradient-descent step. Since all other updates are very simple, their costs are omitted here. Thus, the overall computational cost is O(T_outer · 2(N+1)CQ · (C + T_inner)), where T_outer is the number of iterations of the overall algorithm and T_inner is the number of gradient steps in updating b̂^{r+1}. As shown in Appendix D, the proposed method TA-LBF always converges very fast in our experiments, so T_outer is not very large. As demonstrated in Appendix E.3, T_inner is set to 5 in our experiments. In short, the proposed method can be optimized very efficiently.
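As a back-of-envelope illustration of this formula (not part of the paper's analysis), one can plug in concrete sizes; the example values below are illustrative assumptions:

```python
def talbf_ops(N, C, Q, t_outer, t_inner=5):
    """Rough operation count for O(T_outer * 2(N+1)CQ * (C + T_inner))."""
    forward = 2 * (N + 1) * C ** 2 * Q            # logits of classes s and t
    backward = 2 * (N + 1) * C * Q * t_inner      # gradient steps on b_hat
    return t_outer * (forward + backward)

# e.g., CIFAR-10 with N=128 auxiliary samples, C=64 features, Q=8 bits
print(f"{talbf_ops(N=128, C=64, Q=8, t_outer=2000):.2e} ops")  # ~1.8e10
```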
Besides, we also compare the computational complexity of different attacks empirically. Specifically, we compare the running time for attacking one image with each method against the 8-bit quantized ResNet on the CIFAR-10 and ImageNet datasets. As shown in Table 3, TBT is the most time-consuming method among all attacks. Although the proposed TA-LBF is not superior to T-BFA, FSA, and GDA in running time, this gap is tolerable when attacking a single image in the deployment stage. Besides, our method performs better in terms of PA-ACC, ASR, and Nflip, as demonstrated in our experiments.
D NUMERICAL CONVERGENCE ANALYSIS
We present the numerical convergence of TA-LBF in Fig. 4. Note that ‖b̂ − u_1‖_2^2 and ‖b̂ − u_2‖_2^2 characterize the degree of satisfaction of the box and ℓ2-sphere constraints, respectively. For the two examples from CIFAR-10 and ImageNet, the values of both indicators first increase, then drop, and finally approach 0. Another interesting observation is that L_1 + λL_2 first decreases evidently and then increases slightly. These findings illustrate the optimization process of TA-LBF: in the early iterations, modifying the model parameters mainly serves the two goals stated in Section 3.1; in the late iterations, b̂ is encouraged to satisfy the box and ℓ2-sphere constraints. We also observe that both examples stop once ‖b̂ − u_1‖_2^2 ≤ 10^{-4} and ‖b̂ − u_2‖_2^2 ≤ 10^{-4} are met, without exceeding the maximum number of iterations (i.e., 2000). These numerical results demonstrate the fast convergence of our method in practice.
E EVALUATION SETUP
E.1 BASELINE METHODS
Since GDA (Liu et al., 2017a) and FSA (Zhao et al., 2019) are originally designed for attacking full-precision networks, we adapt these two methods to attack quantized networks by applying quantization-aware training (Jacob et al., 2018). We adopt the ℓ0-norm for FSA (Zhao et al., 2019) and modification compression for GDA (Liu et al., 2017a) to reduce the number of modified parameters. Among the three types of T-BFA (Rakin et al., 2020b), we compare to the most comparable one: the 1-to-1 stealthy attack scheme. The purpose of this attack scheme is to misclassify samples of a single source class into the target class while maintaining the prediction accuracy on other samples. Besides, we take fine-tuning (FT) of the last fully-connected layer as a basic attack and present its results. We perform the attack once for each selected image, except for TBT (Rakin et al., 2020a), for a total of 1,000 attacks on each dataset. The attack objective of TBT is that the attacked DNN model misclassifies all inputs embedded with a trigger into a certain target class. Due to this objective, the number of attacks for TBT equals the number of target classes (i.e., 10 attacks on CIFAR-10 and 50 attacks on ImageNet).
E.2 TARGET MODELS
According to the settings in (Rakin et al., 2020a;b), we adopt two popular network architectures, ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015), for evaluation. On CIFAR-10, we perform experiments on ResNet-20 and VGG-16. On ImageNet, we use the pre-trained ResNet-18* and VGG-16† networks. We quantize all networks to the 4-bit and 8-bit quantization levels using the layer-wise uniform weight quantization scheme, which is similar to the one used in the Tensor-RT solution (Migacz, 2017).

*Downloaded from https://download.pytorch.org/models/resnet18-5c106cde.pth
†Downloaded from https://download.pytorch.org/models/vgg16_bn-6c64b313.pth
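As a reference for this quantization scheme, a minimal sketch of the de-quantization function h(·) from Eq. (1), which maps a Q-bit two's-complement vector back to a real-valued weight, might look as follows (the list-based bit ordering is an assumption for illustration):

```python
def bits_to_weight(v, delta):
    """h(v) from Eq. (1): v = [v_1, ..., v_Q] with v_Q the sign bit,
    delta the step size of the layer-wise uniform quantizer."""
    Q = len(v)
    magnitude = sum(2 ** (i - 1) * v[i - 1] for i in range(1, Q))
    return (-2 ** (Q - 1) * v[Q - 1] + magnitude) * delta

# e.g., an 8-bit weight with step size 0.01
print(bits_to_weight([1, 0, 1, 0, 0, 0, 0, 1], delta=0.01))  # -1.23
```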
E.3 PARAMETER SETTINGS OF TA-LBF
For each attack, we adopt a strategy for jointly searching λ and k. Specifically, for an initially given k, we search λ starting from a relatively large initial value and divide it by 2 if the attack does not succeed. The maximum number of searches over λ for a fixed k is set to 8. If this maximum is exceeded,
we double k and search λ again from the relatively large initial value. The maximum number of searches over k is set to 4. On CIFAR-10, the initial k and λ are set to 5 and 100, respectively. On ImageNet, λ is initialized as 10^4; k is initialized as 5 and 50 for ResNet and VGG, respectively. On CIFAR-10, the δ in L_1 is set to 10. On ImageNet, δ is set to 3 and increased to 10 if the attack fails. u_1 and u_2 are initialized as b, u_3 is initialized as 0, and z_1, z_2, and z_3 are all initialized as 0. b̂ is initialized as b. During each iteration, the number of gradient steps for updating b̂ is 5 and the step size is set to 0.01 on both datasets. The hyper-parameters (ρ_1, ρ_2, ρ_3) (see Eq. (11)) are initialized as (10^{-4}, 10^{-4}, 10^{-5}) on both datasets, and increase by ρ_i ← ρ_i × 1.01, i = 1, 2, 3 after each iteration. The maximum values of (ρ_1, ρ_2, ρ_3) are set to (50, 50, 5) on both datasets. Besides the maximum number of iterations (i.e., 2000), we also adopt another stopping criterion: ‖b̂ − u_1‖_2^2 ≤ 10^{-4} and ‖b̂ − u_2‖_2^2 ≤ 10^{-4}.
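The search procedure above can be summarized by the following sketch; run_attack is a hypothetical callback that runs Algorithm 1 with the given (λ, k) and returns whether the attack succeeded.

```python
def search_lambda_k(run_attack, k_init, lam_init,
                    max_lam_trials=8, max_k_trials=4):
    """Joint search over lambda and k as described above."""
    k = k_init
    for _ in range(max_k_trials):
        lam = lam_init
        for _ in range(max_lam_trials):
            if run_attack(lam, k):
                return lam, k        # attack succeeded
            lam /= 2.0               # relax the stealthiness trade-off
        k *= 2                       # enlarge the bit-flip budget
    return None                      # failed within the search budget

# e.g., CIFAR-10 settings: initial k = 5, initial lambda = 100
# result = search_lambda_k(run_attack, k_init=5, lam_init=100.0)
```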
F MORE RESULTS ON RESISTANCE TO DEFENSE METHODS
F.1 RESISTANCE TO PIECE-WISE CLUSTERING
We conduct experiments using the 8-bit quantized ResNet on CIFAR-10 with different clustering coefficients. We set the maximum number of searches over k to 5 for clustering coefficients 0.005 and 0.01, and keep the rest of the settings the same as those in Section 4.1. The results are presented in Table 4. As shown in the table, the Nflip values for all methods are larger than when attacking models without defense, similar to Table 2. Our method achieves a 100% ASR with the fewest Nflip under all three clustering coefficients. Although TBT obtains a smaller ∆Nflip than our method, it fails to achieve a satisfactory ASR; for example, TBT achieves only a 10.1% ASR when the clustering coefficient is set to 0.01. We observe that piece-wise clustering reduces the original accuracy for all clustering coefficients, and this phenomenon becomes more significant as the clustering coefficient increases. The results also show that a larger clustering coefficient (e.g., 0.01) does not guarantee a more robust model, which is consistent with the finding in (He et al., 2020).
F.2 RESISTANCE TO LARGER MODEL CAPACITY
Besides the results for networks with 2× width shown in Section 4.3, we also evaluate all methods against models with 3× and 4× width. All settings are the same as those used in Section 4.1. The results are provided in Table 5. Among all attack methods, our method is the least affected by increasing the network width; for the network with 4× width, our ∆Nflip is only 2.80. These results demonstrate the superiority of the formulated BIP problem and its optimization. Moreover, compared with piece-wise clustering, a larger model capacity can improve the original accuracy, but it increases the model size and the computational complexity.
G DISCUSSIONS
G.1 COMPARING BACKDOOR, ADVERSARIAL, AND WEIGHT ATTACK
An attacker can achieve malicious purposes utilizing backdoor, adversarial, and weight attacks. In this section, we emphasize the differences among them.
Backdoor attack happens in the training stage and requires that the attacker can tamper with the training data or even the training process (Liu et al., 2020b; Li et al., 2020). By poisoning some training samples with a trigger, the attacker can control the behavior of the attacked DNN in the inference stage; for example, images with reflections are misclassified into a target class, while benign images are classified normally (Liu et al., 2020a). However, such an attack paradigm causes accuracy degradation on benign samples, which makes it detectable to users. Besides, these methods also require modifying samples in the inference stage, which is sometimes impossible for the attacker. Many defense methods against backdoor attack have been proposed, such as the preprocessing-based defense (Liu et al., 2017b), the model reconstruction-based defense (Liu et al., 2018a), and the trigger synthesis-based defense (Wang et al., 2019).
Adversarial attack modifies samples in the inference stage by adding small perturbations that remain imperceptible to the human visual system (Akhtar & Mian, 2018). Since adversarial attack only modifies inputs while keeping the model unchanged, it has no effect on benign samples. Besides the basic white-box attack, the black-box attack (Wu et al., 2020b; Chen et al., 2020) and the universal attack (Zhang et al., 2020b;a) have attracted wide attention. Inspired by its success in classification, adversarial attack has also been extended to other tasks, including image captioning (Xu et al., 2019), retrieval (Bai et al., 2020; Feng et al., 2020), etc. Similarly, recent studies have proposed many defense methods against adversarial attack, including the preprocessing-based defense (Xie et al., 2018), the detection-based defense (Xu et al., 2017), and the adversarial learning-based defense (Carmon et al., 2019; Wu et al., 2020c).
Weight attack modifies model parameters in the deployment stage, which is the paradigm studied in this work. Weight attack generally aims at misleading the DNN model on the selected sample(s) while having a minor effect on other samples (Zhao et al., 2019; Rakin et al., 2020b). Many studies (Yao et al., 2020; Breier et al., 2018; Pan, 2020) have demonstrated that DNN parameters can be modified at the bit-level in memory using fault injection techniques (Agoyan et al., 2010; Kim et al., 2014; Selmke et al., 2015) in practice. Note that defense methods against weight attack have not been well studied. Although some defenses (He et al., 2020) have been proposed, they cannot achieve satisfactory performance; for example, our method can still achieve a 100% attack success rate against the two evaluated defense methods. We hope our work will encourage further investigation into the security of model parameters from both the attack and defense sides.
G.2 COMPARING TA-LBF WITH OTHER WEIGHT ATTACKS
We compare our TA-LBF with other weight attack methods, including TBT (Rakin et al., 2020a), T-BFA (Rakin et al., 2020b), GDA (Liu et al., 2017a), and FSA (Zhao et al., 2019), in this section. TBT tampers with both the test sample and the model parameters: it first locates critical bits and generates a trigger, and then flips these bits to classify all inputs embedded with the trigger into a target class. However, the malicious samples are easily detected by human inspection or by many detection methods (Tran et al., 2018; Du et al., 2020). TA-LBF does not modify the samples, which makes the attack more stealthy. Rakin et al. (2020b) proposed T-BFA, which misclassifies all samples (N-to-1 version) or samples from a source class (1-to-1 version) into a target class. Our method aims at misclassifying a specific sample, which meets the attacker's requirement in some scenarios; for example, the attacker may want to manipulate the behavior of a face recognition engine on one specific input. Since it affects multiple samples, T-BFA may not be stealthy enough when attacking real-world applications. GDA (Liu et al., 2017a) and FSA (Zhao et al., 2019) modify model parameters at the weight-level rather than the bit-level. They are designed to misclassify multiple samples from arbitrary classes, which makes it infeasible for them to modify only the parameters connected to the source and target classes; this may explain why they modify more parameters than our method in the experiments. Besides, TBT, T-BFA, and GDA determine the critical weights to modify using heuristic strategies, while our TA-LBF adopts an optimization-based method. Although FSA applies ADMM to solve its optimization problem, it has no explicit constraint to control the number of modified parameters, which makes it tend to modify more parameters than GDA and our TA-LBF.
H TRADE-OFF BETWEEN THREE EVALUATION METRICS
In this section, we investigate the trade-off between the three adopted evaluation metrics (i.e., PA-ACC, ASR, and Nflip) for our attack. All experiments are conducted on the CIFAR-10 and ImageNet datasets, attacking the 8-bit quantized ResNet.
We first discuss the trade-off between PA-ACC and Nflip by fixing the ASR at 100% using the search strategy in Appendix E.3 and adjusting the initial λ and k to obtain different attack results. The two curves on the left show that increasing Nflip can improve the PA-ACC when Nflip is relatively small, while the PA-ACC decreases with increasing Nflip once Nflip exceeds a threshold. This phenomenon demonstrates that constraining the number of bit-flips is essential to ensure the attack stealthiness, as mentioned in Section 3.2. To study the trade-off between PA-ACC and ASR,
we fix the parameter k as 10 for approximately 10 bit-flips and adjust the parameter λ to obtain different PA-ACC and ASR results. The trade-off curves between PA-ACC and ASR show that increasing the ASR can decrease the PA-ACC significantly. Therefore, how to achieve a high ASR and a high PA-ACC simultaneously remains an important open problem.

1. What is the focus of the paper, and how does it contribute to the field of adversarial attacks?
2. What are the strengths of the proposed optimization method, particularly in comparison to previous heuristic approaches?
3. Are there any limitations or areas for improvement regarding the method's performance, especially when considering its applicability to real-world scenarios?
4. How does the reviewer assess the clarity and effectiveness of the presentation, specifically regarding the use of tables and potential alternative visualizations?
5. What additional information or analysis would enhance the understanding and impact of the paper's findings?

Review
This paper proposes an ADMM-based optimization method to conduct adversarial weight attacks, achieving performance superior or at least comparable to previous heuristic methods.
Pros:
Adversarial weight attack is an interesting research direction with practical importance and deserves more study.
The proposed method is mathematically sound. Empirically, it outperforms or is at least comparable with previous state-of-the-art methods on undefended models, and consistently outperforms them on defended models.
Cons: I see this paper as a necessary step towards stronger adversarial weight attacks, which could be used as an evaluation method to benchmark future defense methods.
Some comments:
Tables 1 and 2 may not be the best way to present the results. Considering there are three evaluation dimensions (PA-ACC, ASR, and Nflip), I suggest the authors add some Pareto-frontier figures. For example, fixing PA-ACC, plot the trade-off curves between ASR and Nflip for the different methods.
What are the time costs of the different attack methods?
Results on ImageNet. The results on ImageNet are shown in Table 1. It can be observed that GDA shows very competitive performance compared to other methods. However, our method obtains the highest PA-ACC, the fewest bit-flips (less than 8), and a 100% ASR in attacking ResNet. For VGG, our method also achieves a 100% ASR with the fewest Nflip for both bit-widths. The Nflip results of our method are mainly attributed to the cardinality constraint on the number of bit-flips. Moreover, for our method, the average PA-ACC degradation over four cases on ImageNet is only 0.06%, which demonstrates the stealthiness of our attack. When comparing the results of ResNet and VGG, an interesting observation is that all methods require significantly more bit-flips for VGG. One reason is that VGG is much wider than ResNet. Similar to the claim in (He et al., 2020), increasing the network width contributes to the robustness against the bit-flip based attack.
4.3 RESISTANCE TO DEFENSE METHODS
Resistance to Piece-wise Clustering. He et al. (2020) proposed a novel training technique, called piece-wise clustering, to enhance the network robustness against the bit-flip based attack. Such a training technique introduces an additional weight penalty to the inference loss, which has the effect of eliminating close-to-zero weights (He et al., 2020). We test the resistance of all attack methods to the piece-wise clustering. We conduct experiments with the 8-bit quantized ResNet on CIFAR-10 and ImageNet. Following the ideal configuration in (He et al., 2020), the clustering coefficient, which is a hyper-parameter of piece-wise clustering, is set to 0.001 in our evaluation. For our method, the initial k is set to 50 on ImageNet and the rest settings are the same as those in Section 4.1. Besides the three metrics in Section 4.1, we also present the number of increased Nflip compared to the model without defense (i.e., results in Table 1), denoted as ∆Nflip.
The results of the resistance to the piece-wise clustering of all attack methods are shown in Table 2. It shows that the model trained with piece-wise clustering can improve the number of required bit-flips for all attack methods. However, our method still achieves a 100% ASR with the least number of bit-flips on both two datasets. Although TBT achieves a smaller ∆Nflip than ours on CIFAR-10, its ASR is only 52.3%, which also verifies the defense effectiveness of the piece-wise clustering. Compared with other methods, TA-LBF achieves the fewest ∆Nflip on ImageNet and the best PA-ACC on both datasets. These results demonstrate the superiority of our method over other methods when attacking models trained with piece-wise clustering.
Resistance to Larger Model Capacity. Previous studies (He et al., 2020; Rakin et al., 2020b) observed that increasing the network capacity can improve the robustness against the bit-flip based attack. Accordingly, we evaluate all attack methods against the models with a larger capacity using the 8-bit quantized ResNet on both datasets. Similar to the strategy in (He et al., 2020), we increase the model capacity by varying the network width (i.e., 2× width in our experiments). All settings of our method are the same as those used in Section 4.1.
The results are presented in Table 2. We observe that all methods require more bit-flips to attack the model with the 2× width. To some extent, it demonstrates that the wider network with the same architecture is more robust against the bit-flip based attack. However, our method still achieves a 100% ASR with the fewest Nflip and ∆Nflip. Moreover, when comparing the two defense methods, we find that piece-wise clustering performs better than the model with a larger capacity in terms of ∆Nflip. However, piece-wise clustering training also causes the accuracy decrease of the original model (e.g., from 92.16% to 91.01% on CIFAR-10). We provide more results in attacking models with defense under different settings in Appendix F.
4.4 ABLATION STUDY
We perform ablation studies on parameters λ and k, and the number of auxiliary samplesN . We use the 8-bit quantized ResNet on CIFAR-10 as the representative for analysis. We discuss the attack performance of TA-LBF under different values of λ while k is fixed at 20, and under different values of k while λ is fixed at 10. To analyze the effect ofN , we configureN from 25 to 800 and keep other settings the same as those in Section 4.1. The results are presented in Fig. 2. We observe that our method achieves a 100% ASR when λ is less than 20. As expected, the PA-ACC increases while the ASR decreases along with the increase of λ. The plot of parameter k presents that k can exactly limit the number of bit-flips, while other attack methods do not involve such constraint. This advantage is critical since it allows the attacker to identify limited bits to perform an attack when the budget is fixed. As shown in the figure, the number of auxiliary samples less than 200 has a marked positive impact on the PA-ACC. It’s intuitive that more auxiliary samples can lead to a better PA-ACC. The observation also indicates that TA-LBF still works well without too many auxiliary samples.
4.5 VISUALIZATION OF DECISION BOUNDARY
To further compare FSA and GDA with our method, we visualize the decision boundaries of the original and the post attack models in Fig. 3. We adopt a four-layer Multi-Layer Perceptron trained with the simulated 2-D Blob dataset from 4 classes. The original decision boundary indicates that the original model classifies all data points almost perfectly. The attacked sample is classified into Class 3 by all methods. Visually, GDA modifies the decision boundary drastically, especially for Class 0. However, our method modifies the decision boundary mainly around the attacked sample. Althoug FSA is comparable to ours visually in Fig. 3, it flips 10× bits than GDA and TA-LBF. In terms of the numerical results, TA-LBF achieves the best PA-ACC and the fewest Nflip. This finding verifies that our method can achieve a successful attack even only tweaking the original classifier.
5 CONCLUSION
In this work, we have presented a novel attack paradigm that the weights of a deployed DNN can be slightly changed via bit flipping in the memory, to give a target prediction for a specific sample, while the predictions on other samples are not significantly influenced. Since the weights are stored as binary bits in the memory, we formulate this attack as a binary integer programming (BIP) problem, which can be effectively and efficiently solved by a continuous algorithm. Since the critical bits are determined through optimization, the proposed method can achieve the attack goals by flipping a few bits, and it shows very good performance under different experimental settings.
ACKNOWLEDGMENTS
This work is supported in part by the National Key Research and Development Program of China under Grant 2018YFB1800204, the National Natural Science Foundation of China under Grant 61771273, the R&D Program of Shenzhen under Grant JCYJ20180508152204044. Baoyuan Wu is supported by the Natural Science Foundation of China under grant No. 62076213, and the university development fund of the Chinese University of Hong Kong, Shenzhen under grant No. 01001810.
B ALGORITHM OUTLINE
Algorithm 1 Continuous optimization for the BIP problem (5). Input: The original quantized DNN model f with weights Θ,B, attacked sample x with groundtruth label s, target class t, auxiliary sample set {(xi, yi)}Ni=1, hyper-parameters λ, k, and δ. Output: b̂.
1: Initial u01, u 0 2, u 0 3, z 0 1 , z 0 2 , z 0 3 , b̂ 0 and let r ← 0; 2: while not converged do 3: Update ur+11 , u r+1 2 and u r+1 3 as Eq. (9); 4: Update b̂r+1 as Eq. (10); 5: Update zr+11 , z r+1 2 and z r+1 3 as Eq. (11); 6: r ← r + 1. 7: end while
C COMPLEXITY ANALYSIS
The computational complexity of the proposed algorithm (i.e., Algorithm 1) consists of two parts, the forward and backward pass. In terms of the forward pass, since Θ,B{1,...,K}\{s,t} are fixed during the optimization, their involved terms, including g(x; Θ) and p(x; Θ,Bi)|i 6=s,t, are calculated only one time. The main cost from B̂s and B̂t is O(2(N + 1)C2Q) per iteration, as there are N + 1 samples. In terms of the backward pass, the main cost is from the update of b̂r+1, which is O(2(N + 1)CQ) per iteration in the gradient descent. Since all other updates are very simple, their costs are omitted here. Thus, the overall computational cost is O ( Touter[2(N + 1)CQ · (C + Tinner)] ) , with Touter being the iteration of the overall algorithm and Tinner indicating the number of gradient steps in updating b̂r+1. As shown in Section D, the proposed method TA-LBF always converges very fast in our experiments, thus Touter is not very large. As demonstrated in Section E.3, Tinner is set to 5 in our experiments. In short, the proposed method can be optimized very efficiently.
Besides, we also compare the computational complexity of different attacks empirically. Specifically, we compare the running time of attacking one image of different methods against the 8-bit quantized ResNet on CIFAR-10 and ImageNet dataset. As shown in Table 3, TBT is the most timeconsuming method among all attacks. Although the proposed TA-LBF is not superior to T-BFA, FSA, and GDA in running time, this gap can be tolerated when attacking a single image in the deployment stage. Besides, our method performs better in terms of PA-ACC, ASR, and Nflip as demonstrated in our experiments.
D NUMERICAL CONVERGENCE ANALYSIS
We present the numerical convergence of TA-LBF in Fig. 4. Note that ||b̂ − u1||22 and ||b̂ − u2||22 characterize the degree of satisfaction of the box and `2-sphere constraint, respectively. For the two examples of CIFAR-10 and ImageNet, the values of both indicators first increase, then drop, and finally close to 0. Another interesting observation is that L1 + λL2 first decreases evidently and then increases slightly. Such findings illustrate the optimization process of TA-LBF. In the early iterations, modifying the model parameters tends to achieve the two goals mentioned in Section 3.1; in the late iterations, b̂ is encouraged to satisfy the box and l2-sphere constraint. We also observe that both examples stop when meeting ||b̂ − u1||22 ≤ 10−4 and ||b̂ − u2||22 ≤ 10−4 and do not
exceed the maximum number of iterations (i.e., 2000). The numerical results demonstrate the fast convergence of our method in practice.
E EVALUATION SETUP
E.1 BASELINE METHODS
Since GDA (Liu et al., 2017a) and FSA (Zhao et al., 2019) are originally designed for attacking the full-precision network, we adapt these two methods to attack the quantized network by applying quantization-aware training (Jacob et al., 2018). We adopt the `0-norm for FSA (Liu et al., 2017a) and modification compression for GDA (Zhao et al., 2019) to reduce the number of the modified parameters. Among three types of T-BFA (Rakin et al., 2020b), we compare to the most comparable method: the 1-to-1 stealthy attack scheme. The purpose of this attack scheme is to misclassify samples of a single source class into the target class while maintaining the prediction accuracy of other samples. Besides, we take the fine-tuning (FT) of the last fully-connected layer as a basic attack and present its results. We perform attack once for each selected image except TBT (Rakin et al., 2020a) and totally 1,000 attacks on each dataset. The attack objective of TBT is that the attacked DNN model misclassifies all inputs with a trigger to a certain target class. Due to such objective, the number of attacks for TBT is equal to the number of target classes (i.e., 10 attacks on CIFAR-10 and 50 attacks on ImageNet).
E.2 TARGET MODELS
According to the setting in (Rakin et al., 2020a;b), we adopt two popular network architectures: ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015) for evaluation. On CIFAR-10, we perform experiments on ResNet-20 and VGG-16. On ImageNet, we use the pre-trained ResNet18* and VGG-16† network. We quantize all networks to the 4-bit and 8-bit quantization level using the layer-wise uniform weight quantization scheme, which is similar to the one involved in the Tensor-RT solution (Migacz, 2017).
E.3 PARAMETER SETTINGS OF TA-LBF
For each attack, we adopt a strategy for jointly searching λ and k. Specifically, for an initially given k, we search λ from a relatively large initial value and divide it by 2 if the attack does not succeed. The maximum search times of λ for a fixed k is set to 8. If it exceeds the maximum search times,
*Downloaded from https://download.pytorch.org/models/resnet18-5c106cde.pth †Downloaded from https://download.pytorch.org/models/vgg16_bn-6c64b313.pth
we double k and search λ from the relatively large initial value. The maximum search times of k is set to 4. On CIFAR-10, the initial k and λ are set to 5 and 100. On ImageNet, λ is initialized as 104; k is initialized as 5 and 50 for ResNet and VGG, respectively. On CIFAR-10, the δ in L1 is set to 10. On ImageNet, the δ is set to 3 and increased to 10 if the attack fails. u1 and u2 are initialized as b and u3 is initialized as 0. z1 and z2 are initialized as 0 and z3 is initialized as 0. b̂ is initialized as b. During each iteration, the number of gradient steps for updating b̂ is 5 and the step size is set to 0.01 on both datasets. Hyper-parameters (ρ1, ρ2, ρ3) (see Eq. (11)) are initialized as (10−4, 10−4, 10−5) on both datasets, and increase by ρi ← ρi×1.01, i = 1, 2, 3 after each iteration. The maximum values of (ρ1, ρ2, ρ3) are set to (50, 50, 5) on both datasets. Besides the maximum number of iterations (i.e., 2000), we also set another stopping criterion, i.e., ||b̂−u1||22 ≤ 10−4 and ||b̂− u2||22 ≤ 10−4.
F MORE RESULTS ON RESISTANCE TO DEFENSE METHODS
F.1 RESISTANCE TO PIECE-WISE CLUSTERING
We conduct experiments using the 8-bit quantized ResNet on CIFAR-10 with different clustering coefficients. We set the maximum search times of k to 5 for clustering coefficient 0.005 and 0.01 and keep the rest settings the same as those in Section 4.1. The results are presented in Table 4. As shown in the table, all values of Nflip are larger than attacking models without defense for all methods, which is similar to Table 2. Our method achieves a 100% ASR with the fewest Nflip under the three clustering coefficients. Although TBT obtains a smaller ∆Nflip than our method, it fails to achieve a satisfactory ASR. For example, TBT achieves only a 10.1% ASR when the clustering coefficient is set to 0.01. We observe that for all clustering coefficients, piece-wise clustering reduces the original accuracy. Such a phenomenon is more significant as the clustering coefficient increases. The results also show that there is no guarantee that if the clustering coefficient is larger (e.g., 0.01), the model is more robust, which is consistent with the finding in (He et al., 2020).
F.2 RESISTANCE TO LARGER MODEL CAPACITY
Besides the results for networks with 2× width shown in Section 4.3, we also evaluate all methods against models with 3× and 4× width. All settings are the same as those used in Section 4.1. The results are provided in Table 5. Among all attack methods, our method is least affected by increasing the network width; in particular, for the network with 4× width, our ∆Nflip is only 2.80. The results demonstrate the superiority of the formulated BIP problem and its optimization. Moreover, in contrast to piece-wise clustering, a larger model capacity improves the original accuracy, but at the cost of a larger model size and higher computational complexity.
G DISCUSSIONS
G.1 COMPARING BACKDOOR, ADVERSARIAL, AND WEIGHT ATTACK
An attacker can achieve malicious goals via backdoor, adversarial, or weight attacks. In this section, we emphasize the differences among them.
Backdoor attack happens in the training stage and requires that the attacker can tamper with the training data or even the training process (Liu et al., 2020b; Li et al., 2020). By poisoning some training samples with a trigger, the attacker can control the behavior of the attacked DNN at inference time. For example, images with reflections are misclassified into a target class, while benign images are classified normally (Liu et al., 2020a). However, such an attack paradigm causes accuracy degradation on benign samples, which makes it detectable by users. Besides, these methods also require modifying samples at inference time, which is sometimes impossible for the attacker. Many defense methods against backdoor attacks have been proposed, such as preprocessing-based defenses (Liu et al., 2017b), model reconstruction-based defenses (Liu et al., 2018a), and trigger synthesis-based defenses (Wang et al., 2019).
Adversarial attack modifies samples at inference time by adding small perturbations that remain imperceptible to the human visual system (Akhtar & Mian, 2018). Since adversarial attack only modifies inputs while keeping the model unchanged, it has no effect on benign samples. Besides the basic white-box attack, the black-box attack (Wu et al., 2020b; Chen et al., 2020) and the universal attack (Zhang et al., 2020b;a) have attracted wide attention. Inspired by its success in classification, it has also been extended to other tasks, including image captioning (Xu et al., 2019), retrieval (Bai et al., 2020; Feng et al., 2020), etc. Similarly, recent studies have demonstrated many defense methods against adversarial attack, including the preprocessing-based defense (Xie et al., 2018), the detection-based defense (Xu et al., 2017), and the adversarial learning-based defense (Carmon et al., 2019; Wu et al., 2020c).
Weight attack modifies model parameters in the deployment stage, which is the paradigm studied in this work. Weight attack generally aims at misleading the DNN model on the selected sample(s) while having a minor effect on other samples (Zhao et al., 2019; Rakin et al., 2020b). Many studies (Yao et al., 2020; Breier et al., 2018; Pan, 2020) have demonstrated that DNN parameters can be modified at the bit level in memory using fault injection techniques (Agoyan et al., 2010; Kim et al., 2014; Selmke et al., 2015) in practice. Note that defenses against weight attack have not been well studied. Although some defense methods (He et al., 2020) have been proposed, they cannot achieve satisfactory performance; for example, our method still achieves a 100% attack success rate against the two evaluated defenses. We hope our work encourages further investigation of the security of model parameters from both the attack and defense sides.
G.2 COMPARING TA-LBF WITH OTHER WEIGHT ATTACKS
We compare our TA-LBF with other weight attack methods, including TBT (Rakin et al., 2020a), T-BFA (Rakin et al., 2020b), GDA (Liu et al., 2017a), and FSA (Zhao et al., 2019) in this section. TBT tampers with both the test sample and the model parameters. Specifically, it first locates critical bits and generates a trigger, and then flips these bits to classify all inputs embedded with the trigger into a target class. However, the malicious samples are easily detected by human inspection or by many detection methods (Tran et al., 2018; Du et al., 2020). We do not modify the samples to perform TA-LBF, which makes the attack more stealthy. Rakin et al. (2020b) proposed T-BFA, which misclassifies all samples (N-to-1 version) or samples from a source class (1-to-1 version) into a target class. Our method aims at misclassifying a specific sample, which meets the attacker's requirement in some scenarios; for example, the attacker may want to manipulate the behavior of a face recognition engine on a specific input. Since it affects multiple samples, T-BFA may not be stealthy enough for attacking real-world applications. GDA (Liu et al., 2017a) and FSA (Zhao et al., 2019) modify model parameters at the weight level rather than the bit level. They are designed for misclassifying multiple samples from arbitrary classes, which makes it infeasible for them to modify only the parameters connected to the source and target classes. As shown in the experiments, they modify more parameters than our method, which might be due to the reason discussed above. Besides, TBT, T-BFA, and GDA determine the critical weights to modify using heuristic strategies, while our TA-LBF adopts an optimization-based method. Although FSA applies ADMM for solving its optimization problem, it has no explicit constraint to control the number of modified parameters, which makes it tend to modify more parameters than GDA and our TA-LBF.
H TRADE-OFF BETWEEN THREE EVALUATION METRICS
In this section, we investigate the trade-off between the three adopted evaluation metrics (i.e., PA-ACC, ASR, and Nflip) for our attack. All experiments are conducted on the CIFAR-10 and ImageNet datasets, attacking the 8-bit quantized ResNet.
We first discuss the trade-off between PA-ACC and Nflip by fixing ASR at 100%, using the search strategy in Appendix E.3 and adjusting the initial λ and k to obtain different attack results. The two curves on the left show that increasing Nflip improves the PA-ACC when Nflip is relatively small, whereas the PA-ACC decreases once Nflip grows beyond a threshold. This phenomenon demonstrates that constraining the number of bit-flips is essential to ensure attack stealthiness, as mentioned in Section 3.2. To study the trade-off between PA-ACC
and ASR, we fix the parameter k as 10 for approximately 10 bit-flips and adjust the parameter λ to obtain different PA-ACC and ASR results. The trade-off curves between PA-ACC and ASR show that increasing ASR decreases the PA-ACC significantly. Therefore, how to achieve high ASR and high PA-ACC simultaneously remains an important open problem. | 1. What is the focus of the paper regarding parameter weight alteration?
2. What are the concerns about the proposed method's motivation and comparison with prior works?
3. How does the reviewer assess the significance of the number of flipped bits in the attack?
4. Why does the reviewer suggest comparing the proposed approach with other works that map all images from a certain class to the attack target class?
5. What are the limitations of the evaluated defense strategies? | Review | Review
The paper proposes a bit-flip attack where model parameter weights are altered such that a certain sample is misclassified into a target class. While the utilized optimization strategy and the combination of techniques seem interesting, a major concern is the motivation behind the proposed method and why it stands out against prior works.
Comparisons in the experiment section show that prior methods perform on par with the proposed method in terms of attack success rate and benign accuracy. The major difference seems to be the number of flipped bits. However, the paper in its current form does not state "why" the number of flipped bits matters.
The method also performs on par with prior works in terms of resiliency against defense mechanisms. Therefore, it is not really clear why this work should be preferred over prior methods, i.e., does reducing the number of flipped bits matter for the attack?
In addition to motivating the number of flipped bits, the authors also need to clarify why the current "single image" attack is preferred over other works where all images from a certain class are mapped to the attack target class. The proposed approach seems to be, in fact, a special case of the latter scenario, which is studied in prior works.
The evaluated defense strategies are all passive, i.e., they are applied before the attack and are therefore not aware of the attack strategy. For a comprehensive examination, the authors should also compare with the defense in [1], which is applied at the inference phase under the assumption that a bit-flip attack may have occurred on the model parameters.
ICLR | Title
Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution
Abstract
Inductive one-bit matrix completion is motivated by modern applications such as recommender systems, where new users would appear at test stage with ratings consisting of only ones and no zeros. We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing. The key idea is to transform each user's ratings on the items into a function (graph signal) on the vertices of an item-item graph, then learn structural graph properties to recover the function from its values on certain vertices, i.e., the problem of graph signal sampling. We propose a class of regularization functionals that takes into account discrete random label noise in the graph vertex domain, then develop the GS-IMC approach, which biases the reconstruction towards functions that vary little between adjacent vertices for noise reduction. Theoretical results show that accurate reconstructions can be achieved under mild conditions. For the online setting, we develop a Bayesian extension, i.e., BGS-IMC, which considers continuous random Gaussian noise in the graph Fourier domain and builds upon a prediction-correction update algorithm to obtain the unbiased and minimum-variance reconstruction. Both GS-IMC and BGS-IMC have closed-form solutions and thus are highly scalable on large data, as verified on public benchmarks.
1 INTRODUCTION
In domains such as recommender systems and social networks, only "likes" (i.e., ones) are observed in the system, and service providers (e.g., Netflix) are interested in discovering potential "likes" for existing users to stimulate demand. This motivates the problem of one-bit matrix completion (OBMC), of which the goal is to recover missing values in an n-by-m item-user matrix R ∈ {0, 1}^{n×m}. We note that Ri,j = 1 means that item i is rated by user j, but Ri,j = 0 is essentially unlabeled or unknown, being a mixture of unobserved positive examples and true negative examples.
However, in the real world, new users who are not exposed to the model during training may appear at the testing stage. This fact stimulates the development of inductive one-bit matrix completion, which aims to recover an unseen vector y ∈ {0, 1}^n from its partial positive entries Ω+ ⊆ {j | yj = 1} at test time. Fig. 1(a) emphasizes the difference between conventional and inductive approaches. More formally, let M ∈ {0, 1}^{n×(m+1)} denote the underlying matrix, where only a subset of positive examples Ψ is randomly sampled from {(i, j) | Mi,j = 1, i ≤ n, j ≤ m} such that Ri,j = 1 for (i, j) ∈ Ψ and Ri,j = 0 otherwise. Considering the (m+1)-th column y of matrix M, we likewise denote its observations by si = 1 for i ∈ Ω+ and si = 0 otherwise. We note that the sampling process here assumes that there exists a random label noise ξ which flips a 1 to 0 with probability ρ, or equivalently s = y + ξ, where
ξi = −1 for i ∈ {j|yj = 1} − Ω+, and ξi = 0 otherwise. (1) Fig. 1(a) presents an example of s,y, ξ to better understand their relationships.
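As a minimal illustration of the sampling model in Eq. (1), the snippet below simulates an observed vector s from a ground-truth y under flip rate ρ; the flip rate 0.3 and the vector length are made-up values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)             # true binary ratings
flip = (y == 1) & (rng.random(1000) < 0.3)    # flip a 1 to 0 w.p. rho = 0.3
xi = np.where(flip, -1, 0)                    # discrete label noise of Eq. (1)
s = y + xi                                    # observed positive-only data
```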
Fundamentally, the reconstruction of true y from corrupted s bears a resemblance with graph signal sampling. Fig. 1(b) shows that the item-user rating matrix R can be used to define a homogeneous
∗Junchi Yan is the correspondence author who is also with Shanghai AI Laboratory. The work was in part supported by NSFC (62222607), STCSM (22511105100).
item-item graph (see Sec 3.1), such that user ratings y/s on items can be regarded as signals residing on graph nodes. The reconstruction of bandlimited graph signals from certain subsets of vertices (see Sec 2) has been extensively studied in graph signal sampling (Pesenson, 2000; 2008).
Despite its popularity in areas such as image processing (Shuman et al., 2013; Pang & Cheung, 2017; Cheung et al., 2018) and matrix completion (Romero et al., 2016; Mao et al., 2018; McNeil et al., 2021), graph signal sampling appears less studied for the specific inductive one-bit matrix completion problem focused on in this paper (see Appendix A for detailed related works). Probably most closely related to our approach are MRFCF (Steck, 2019) and SGMC (Chen et al., 2021), which formulate their solutions as spectral graph filters. However, we argue that these methods are orthogonal to ours, since they focus on optimizing the rank minimization problem, whereas we optimize the functional minimization problem, thereby making it more convenient and straightforward to process and analyze the matrix data with vertex-frequency analysis (Hammond et al., 2011; Shuman et al., 2013), time-variant analysis (Mao et al., 2018; McNeil et al., 2021), and smoothing and filtering (Kalman, 1960; Khan & Moura, 2008). Furthermore, (Steck, 2019; Chen et al., 2021) can be incorporated as special cases of our unified graph signal sampling framework (see Appendix B for detailed discussions).
Another emerging line of research has focused on learning the mapping from side information (or content features) to latent factors (Jain & Dhillon, 2013; Xu et al., 2013; Ying et al., 2018; Zhong et al., 2019). However, it has been recently shown (Zhang & Chen, 2020; Ledent et al., 2021; Wu et al., 2021) that in general this family of algorithms would possibly suffer inferior expressiveness when high-quality content is not available. Further, collecting personal data is likely to be unlawful as well as a breach of the data minimization principle in GDPR (Voigt & Von dem Bussche, 2017).
Much effort has also been made to leverage the advanced graph neural networks (GNN) for improvements. van den Berg et al. (2017) represent the data matrix R by a bipartite graph then generalize the representations to unseen nodes by summing the embeddings over the neighbors. Zhang & Chen (2020) develop graph neural networks which encode the subgraphs around an edge into latent factors then decode the factors back to the value on the edge. Besides, Wu et al. (2021) consider the problem in a downsampled homogeneous graph (i.e., user-user graph in recommender systems) then exploit attention networks to yield inductive representations. The key advantage of our approach is not only the closed form solution which takes a small fraction of training time required for GNNs, but also theory results that guarantee accurate reconstruction and provide guidance for practical applications.
We emphasize the challenges when connecting ideas and methods of graph signal sampling with inductive 1-bit matrix completion: 1-bit quantization and online learning. Specifically, 1-bit quantization raises challenges for formulating the underlying optimization problems: minimizing the squared loss on the observed positive examples Ω+ yields a degenerate solution (the vector with all entries equal to one achieves zero loss), while minimizing the squared loss on the corrupted data s introduces a systematic error due to the random label noise ξ in Eq. (1). To address the issue, we represent the observed data R as a homogeneous graph, then devise a broader class of regularization functionals on graphs to mitigate the impact of the discrete random noise ξ. Existing theory for total variation denoising (Sadhanala et al., 2016; 2017) and graph regularization (Belkin et al., 2004; Huang et al., 2011), which takes into account continuous Gaussian noise, does not sufficiently address recoverability in inductive 1-bit matrix completion (see Sec 3.4). We finally manage to derive a closed-form solution, entitled Graph Sampling for Inductive (1-bit) Matrix Completion (GS-IMC), which biases the reconstruction towards functions that vary little between adjacent vertices for noise reduction.
For online learning, existing matrix factorization methods (Devooght et al., 2015; Volkovs & Yu, 2015; He et al., 2016) incrementally update model parameters via gradient descent, requiring an expensive line search to set the best learning rate. To scale up to large data, we develop a Bayesian extension called BGS-IMC where a prediction-correction algorithm is devised to instantly refreshes the prediction given new incoming data. The prediction step tracks the evolution of the optimization problem such that the predicted iterate does not drift away from the optimum, while the correction step adjusts for the distance between current prediction and the new information at each step. The advantage over baselines is that BGS-IMC considers the uncertainties in the graph Fourier domain, and the prediction-correction algorithm can efficiently provide the unbiased and minimum-variance predictions in closed form, without using gradient descent techniques. The contributions are:
• New Inductive 1-bit Matrix Completion Framework. We propose and technically manage (for the first time to our best knowledge) to introduce graph signal sampling to inductive 1-bit matrix completion. It opens the possibility of benefiting the analysis and processing of the matrix with the signal processing toolbox, including vertex-frequency analysis (Hammond et al., 2011; Shuman et al., 2013), time-variant analysis (Mao et al., 2018; McNeil et al., 2021), and smoothing and filtering (Kalman, 1960; Khan & Moura, 2008). We believe that our unified framework can serve as a new paradigm for 1-bit matrix completion, especially in large-scale and dynamic systems.
• Generalized Closed-form Solution. We derive a novel closed-form solution (i.e., GS-IMC) in the graph signal sampling framework, which incorporates existing closed-form solutions as special cases, e.g., (Chen et al., 2021; Steck, 2019). GS-IMC is learned from only positive data with discrete random noise. This is one of the key differences to typical denoising methods (Sadhanala et al., 2016), where effort is spent on removing continuous Gaussian noise from a real-valued signal.
• Robustness Enhancement. We consider the online learning scenario and construct a Bayesian extension, i.e., BGS-IMC, where a new prediction-correction algorithm is proposed to instantly yield unbiased and minimum-variance predictions given new incoming data. Experiments in Appendix E show that BGS-IMC is more cost-effective than many neural models such as SASREC (Kang & McAuley, 2018), BERT4REC (Sun et al., 2019) and GREC (Yuan et al., 2020). We believe that this proves a potential for the future application of graph signal sampling to sequential recommendation.
• Theoretical Guarantee and Empirical Effectiveness. We extend the Paley-Wiener theorem of (Pesenson, 2009) from real-valued data to positive-unlabelled data with statistical noise. The theory shows that under mild conditions, unseen rows and columns in training can be recovered from a certain subset of their values that is present at test time. Empirical results on real-world data show that our methods achieve state-of-the-art performance for the challenging inductive Top-N ranking tasks.
2 PRELIMINARIES
In this section, we introduce the notions and provide the necessary background of graph sampling theory. Let G = (V,E,w) denote a weighted, undirected and connected graph, where V is a set of vertices with |V | = n, E is a set of edges formed by the pairs of vertices and the positive weight w(u, v) on each edge is a function of the similarity between vertices u and v.
Space L2(G) is the Hilbert space of all real-valued functions f : V → R with the following norm:
‖f‖ = √( ∑_{v∈V} |f(v)|² ), (2)
and the discrete Laplace operator Ł is defined by the formula (Chung & Graham, 1997):
Łf(v) = (1/√d(v)) ∑_{u∈N(v)} w(u, v) ( f(v)/√d(v) − f(u)/√d(u) ), f ∈ L2(G),
where N(v) signifies the neighborhood of node v and d(v) = ∑_{u∈N(v)} w(u, v) is the degree of v.
Definition 1 (Graph Fourier Transform). Given a function or signal f in L2(G), the graph Fourier transform and its inverse (Shuman et al., 2013) can be defined as follows:
f̃G = U^⊤f and f = Uf̃G, (3)
where U represents the eigenfunctions of the discrete Laplace operator Ł, f̃G denotes the signal in the graph Fourier domain and f̃G(λl) = 〈f, ul〉 signifies the information at frequency λl1. Definition 2 (Bandlimitedness). f ∈ L2(G) is called an ω-bandlimited function if its Fourier transform f̃G has support in [0, ω], and ω-bandlimited functions form the Paley-Wiener space PWω(G). Definition 3 (Graph Signal Sampling). Given y ∈ PWω(G), y can be recovered from its values on the vertices Ω+ by minimizing the objective below (Pesenson, 2000; 2008), with positive scalar k:
min_{f∈L2(G)} ‖Ł^k f‖ s.t. f(v) = y(v), ∀v ∈ Ω+. (4)
Recall that the observation in inductive 1-bit matrix completion consists of only ones and no zeros (i.e., y(v) = 1 for v ∈ Ω+) and ‖Ł^k 1‖ = 0. It is obvious that minimizing the loss on the observed entries corresponding to ones produces a degenerate solution: the vector with all entries equal to one achieves zero loss. From this point of view, existing theory for sampling real-valued signals (Pesenson, 2000; 2008) is not well suited to the inductive 1-bit matrix completion problem.
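To make Definitions 1-3 concrete, the toy sketch below builds a small normalized Laplacian, computes the graph Fourier basis, and transforms a vertex signal; the adjacency matrix and the signal are made up purely for illustration.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)     # toy undirected graph
d = A.sum(1)
L = np.eye(4) - A / np.sqrt(np.outer(d, d))   # I - D^{-1/2} A D^{-1/2}
lam, U = np.linalg.eigh(L)                    # eigenvalues act as frequencies

f = np.array([1.0, 1.0, 0.0, 0.0])            # signal on the vertices
f_hat = U.T @ f                               # forward GFT, Eq. (3)
f_rec = U @ f_hat                             # inverse GFT recovers f
```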
3 CLOSED-FORM SOLUTION FOR 1-BIT MATRIX COMPLETION
This section builds a unified graph signal sampling framework for inductive 1-bit matrix completion that can inductively recover y from the positive ones on the set Ω+. The rationale behind our framework is that rows that have similar observations are likely to have similar reconstructions. This makes a lot of sense in practice; for example, a user (column) is likely to give similar items (rows) similar scores in recommender systems. To achieve this, we need to construct a homogeneous graph G where connected vertices represent rows which have similar observations, so that we can design a class of graph regularized functionals that encourage adjacent vertices on graph G to have similar reconstructed values. In particular, we manage to provide a closed-form solution to the matrix completion problem (entitled GS-IMC), together with theoretical bounds and insights.
3.1 GRAPH DEFINITION
We begin by introducing two different kinds of methods to construct homogeneous graphs using the zero-one matrix R ∈ R^{n×m}: (i) following the definition of hypergraphs (Zhou et al., 2007), matrix R can be regarded as an incidence matrix, so as to formulate the hypergraph Laplacian matrix as Ł = I − Dv^{-1/2} R De^{-1} R^⊤ Dv^{-1/2}, where Dv ∈ R^{n×n} (De ∈ R^{m×m}) is the diagonal degree matrix of vertices (edges); and (ii) for regular graphs, one of the most popular approaches is to utilize the covariance between rows to form the adjacency matrix Ai,j = Cov(Ri, Rj) for i ≠ j, so that we can define the graph Laplacian matrix as Ł = I − Dv^{-1/2} A Dv^{-1/2}.
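A minimal sketch of the two constructions above, assuming a dense binary matrix R; the epsilon guard against zero degrees and the use of absolute affinity weights for the covariance-graph degrees are implementation details not discussed in the text.

```python
import numpy as np

def hypergraph_laplacian(R, eps=1e-12):
    """(i) Hypergraph view: L = I - Dv^{-1/2} R De^{-1} R^T Dv^{-1/2}."""
    dv, de = R.sum(1), R.sum(0)
    Dv = np.diag(1.0 / np.sqrt(np.maximum(dv, eps)))
    De = np.diag(1.0 / np.maximum(de, eps))
    return np.eye(R.shape[0]) - Dv @ R @ De @ R.T @ Dv

def covariance_laplacian(R, eps=1e-12):
    """(ii) Regular-graph view: affinity A_ij = Cov(R_i, R_j) for i != j."""
    A = np.cov(R)
    np.fill_diagonal(A, 0.0)
    d = np.abs(A).sum(1)                  # degrees from affinity weights
    D = np.diag(1.0 / np.sqrt(np.maximum(d, eps)))
    return np.eye(R.shape[0]) - D @ A @ D
```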
3.2 GRAPH SIGNAL SAMPLING FRAMEWORK
Given a graph G = (V,E), any real-valued column y ∈ Rn can be viewed as a function on G that maps from V to R, and specifically the i-th vector component yi is equivalent to the function value y(i) at the i-th vertex. Now it is obvious that the problem of inductive matrix completion, of which the goal is to recover column y from its values on entries Ω+, bears a resemblance to the problem of graph signal sampling that aims to recover function y from its values on vertices Ω+.
However, most existing graph signal sampling methods (Romero et al., 2016; Mao et al., 2018; McNeil et al., 2021) yield degenerate solutions when applied to the 1-bit matrix completion problem. A popular heuristic is to treat some or all of the zeros as negative examples Ω−, then to recover y by optimizing the following functional minimization problem, given any k = 2^l, l ∈ N:
min_{f∈L2(G)} ‖[R(Ł)]^k f‖ s.t. ‖sΩ − fΩ‖ ≤ ε, (5)
1To be consistent with (Shuman et al., 2013), ul (l-th column of matrix U) is the l-th eigenvector associated with the eigenvalue λl, and the graph Laplacian eigenvalues carry a notion of frequency.
where recall that s = y + ξ is the observed data corrupted by the discrete random noise ξ, and sΩ (fΩ) signifies the values of s (f) restricted to Ω = Ω+ ∪ Ω−; R(Ł) = ∑_l R(λl) ul ul^⊤ denotes the regularized Laplace operator, in which {λl} and {ul} are respectively the eigenvalues and eigenfunctions of the operator Ł. It is worth noting that s(i) = y(i) + ξ(i) = 0 for i ∈ Ω− is not true negative data, and hence Ω− will introduce a systematic bias whenever there exists i ∈ Ω− with y(i) = 1. The choice of the regularization function R(λ) needs to account for two critical criteria: 1) the resulting regularization operator R(Ł) needs to be semi-positive definite; 2) as mentioned before, we expect the reconstruction ŷ to have similar values on adjacent nodes, so that uneven functions should be penalized more than even functions. To account for this, we adopt the family of positive, monotonically increasing functions (Smola & Kondor, 2003) presented in Table 1.
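Since Table 1 itself is not reproduced here, the concrete forms below are assumptions following Smola & Kondor (2003) and the special cases worked out in Appendix B; `gamma`, `a` and `cutoff` are illustrative hyper-parameters.

```python
import numpy as np

def tikhonov(lam, gamma=1.0):            # Tikhonov regularization
    return gamma * lam

def diffusion(lam, gamma=1.0):           # diffusion-process regularization
    return np.exp(gamma * lam) - 1.0

def one_step_random_walk(lam, a=4.0):    # requires a >= lambda_max
    return 1.0 / (a - lam)

def bandlimited(lam, cutoff):            # R = 1 below the cutoff, else infinite
    return np.where(lam <= cutoff, 1.0, np.inf)

def filter_response(R_vals, phi=1.0):    # H(lambda) = 1 / (1 + R(lambda)/phi)
    return 1.0 / (1.0 + R_vals / phi)
```

All four families are positive and monotonically increasing on the spectrum, so they penalize uneven (high-frequency) functions more than even (low-frequency) ones.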
Finally, we summarize two natural questions concerning our framework: 1) What are the benefits of introducing the regularized Laplacian penalty? It is obvious that minimizing the discrepancy between sΩ and fΩ alone does not provide the ability to generalize to the remaining vertices V − Ω; Theorems 4 and 5 answer this question by examining the error bounds. 2) What kind of R(Ł) constitutes a reasonable choice? It has been studied in (Huang et al., 2011) that R(Ł) is most appropriate if it is unbiased, since an unbiased R(Ł) reduces variance without incurring any bias on the estimator. We also highlight the empirical study in Appendix C, which evaluates how performance is affected by the definition of graph G and the regularization function R(λ).
3.3 CLOSED-FORM SOLUTION
In what follows, we aim to provide a closed-form solution for our unified framework by treating all of the zeros as negative examples, i.e., s(v) = 1 for v ∈ Ω+ and s(v) = 0 otherwise. Then, by using the method of Lagrange multipliers, we reformulate Eq. (5) as the following problem:
min_{f∈L2(G)} (1/2)〈f, R(Ł)f〉 + (ϕ/2)‖s − f‖², (6)
where ϕ > 0 is a hyperparameter. Obviously, this problem has a closed-form solution:
ŷ = ( I + R(Ł)/ϕ )^{-1} s = ( ∑_l (1 + R(λl)/ϕ) ul ul^⊤ )^{-1} s = H(Ł)s, (7)
where H(Ł) = ∑_l H(λl) ul ul^⊤ with kernel 1/H(λl) = 1 + R(λl)/ϕ, and we exemplify H(λ) for ϕ = 1 in Table 1. From the viewpoint of spectral graph theory, our GS-IMC approach is essentially a spectral graph filter that amplifies (attenuates) the contributions of low (high)-frequency functions.
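A direct sketch of Eq. (7): eigendecompose Ł once, keep the eigenfunctions below the band limit ω, and apply the spectral filter H(λ) = 1/(1 + R(λ)/ϕ). The full eigendecomposition is used here only for clarity; Appendix D describes the scalable alternative.

```python
import numpy as np

def gs_imc_predict(L, s, omega, R_of_lam, phi=10.0):
    """Eq. (7): y_hat = H(L) s restricted to the Paley-Wiener space PW_omega."""
    lam, U = np.linalg.eigh(L)                 # O(n^3); see Appendix D to scale
    keep = lam <= omega                        # bandlimited assumption
    lam, U = lam[keep], U[:, keep]
    H = 1.0 / (1.0 + R_of_lam(lam) / phi)      # per-frequency filter response
    return U @ (H * (U.T @ s))                 # inverse GFT of filtered signal

# Usage with the (assumed) Tikhonov family from the previous sketch:
# y_hat = gs_imc_predict(L, s, omega=1.0, R_of_lam=tikhonov, phi=10.0)
```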
Remark. To understand low-frequency and high-frequency functions, Figure 2 presents case studies in the context of recommender systems on the Netflix prize data (Bennett et al., 2007). Specifically, we divide the vertices (items) into four classes: very-high degree (> 5000), high degree (> 2000), medium degree (> 100) and low degree vertices. Then, we report the recall results of all four classes in different Paley-Wiener spaces PWλ50(G), . . . , PWλ1000(G) for top-100 ranking prediction. The interesting observations are: (1) the low-frequency functions with eigenvalues less than λ100 contribute nothing to low degree vertices; and (2) the high-frequency functions whose eigenvalues are greater than λ500 do not help to increase the performance on very-high degree vertices. This finding implies that low (high)-frequency functions reflect user preferences on popular (cold) items. From this viewpoint, the model defined in Eq. (7) aims to exploit the items with high click-through rate with high certainty, which makes sense in commercial applications.
3.4 ERROR ANALYSIS
Our GS-IMC approach defined in Eq. (7) bears a similarity to total variation denoising (Sadhanala et al., 2016; 2017), graph-constrained regularization (Belkin et al., 2004; 2006), and particularly Laplacian shrinkage methods (Huang et al., 2011). However, we argue that the proposed GS-IMC approach is fundamentally different from previous works. Specifically, they operate on real-valued data while GS-IMC deals with positive-unlabeled data. We believe that our problem setting is more complicated, since the unlabeled data is a mixture of unobserved positive examples and true negative examples. In addition, existing methods analyze the recoverability considering statistical noise to be continuous Gaussian, e.g., Theorem 3 (Sadhanala et al., 2016), Theorem 1.1 (Pesenson, 2009) etc.
However, we study the upper bound of GS-IMC in the presence of discrete random label noise ξ. Specifically, Theorem 4 extends the Paley-Wiener theorem of (Pesenson, 2009) from real-valued data to positive-unlabelled data, showing that a bandlimited function y can be recovered from its values on a certain set Ω. Theorem 5 takes into account the statistical noise ξ and shows that a bandlimited function y can be accurately reconstructed if C²n = C > 0 is a constant not growing with n.
Theorem 4 (Error Analysis, extension of Theorem 1.1 in (Pesenson, 2009)). Given R(λ) with λ ≤ R(λ) on graph G = (V,E), assume that Ωc = V − Ω admits the Poincare inequality ‖ φ ‖≤ Λ ‖ Łφ ‖ for any φ ∈ L2(Ωc) with Λ > 0, then for any y ∈ PWω(G) with 0 < ω ≤ R(ω) < 1/Λ,
‖y − ŷk‖ ≤ 2 (ΛR(ω))^k ‖y‖ and y = lim_{k→∞} ŷk, (8)
where k is a pre-specified hyperparameter and ŷk is the solution of Eq. (5) with ε = 0.
Remark. Theorem 4 indicates that a better estimate of y can be achieved by simply using a higher k, but there is a trade-off between the accuracy of the estimate on one hand, and complexity and numerical stability on the other. We found by experiments that GS-IMC with k = 1 can achieve SOTA results for inductive top-N recommendation on benchmarks. We provide more discussions in Appendix G.
Theorem 5 (Error Analysis, with label noise). Suppose that ξ is the random noise with flip rate ρ, and that the positive λ1 ≤ · · · ≤ λn are the eigenvalues of the Laplacian Ł; then for any function y ∈ PWω(G),
E[ MSE(y, ŷ) ] ≤ (C²n / n) ( ρ / (R(λ1)(1 + R(λ1)/ϕ)²) + 1/(4ϕ) ), (9)
where C²n = R(ω)‖y‖², ϕ is the regularization parameter and ŷ is defined in Eq. (7).
Remark. Theorem 5 shows that for a constant C²n = C > 0 (not growing with n), the reconstruction error converges to zero as n becomes large enough. Also, the reconstruction error decreases as R(ω) declines, which means that low-frequency functions can be recovered more easily than high-frequency functions. We provide more discussions on ϕ and ρ in Appendix H.
4 BAYESIAN GS-IMC FOR ONLINE LEARNING
In general, an inductive learning approach such as GAT (Veličković et al., 2017) or SAGE (Hamilton et al., 2017) can naturally cope with the online learning scenario, where the prediction is refreshed given a newly observed example. Essentially, GS-IMC is an inductive learning approach that can update the prediction more efficiently than previous matrix completion methods (Devooght et al., 2015; He et al., 2016). Let ∆s denote newly arriving data, which might be one-hot as in Fig. 3(a), and let ŷ denote the original prediction based on data s; then we can efficiently update ŷ to ŷnew as follows:
ŷnew = H(Ł)(s + ∆s) = ŷ +H(Ł)∆s. (10)
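Given the spectral quantities from the GS-IMC sketch above, this incremental refresh takes two lines; `U`, `H` and `y_hat` denote the (assumed) cached eigenvectors, filter response and current prediction.

```python
# Eq. (10): refresh the prediction without recomputing H(L) from scratch.
delta = U @ (H * (U.T @ delta_s))   # H(L) applied to the new data only
y_new = y_hat + delta
```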
However, we argue that GS-IMC ingests the new data in an unrealistic, suboptimal way. Specifically, it does not take into account the model uncertainties, assuming that the observed positive data is noise-free. This assumption limits model’s fidelity and flexibility for real applications. In addition, it assigns a uniform weight to each sample, assuming that the innovation, i.e., the difference between the current a priori prediction and the current observation information, is equal for all samples.
4.1 PROBLEM FORMULATION
To model the uncertainties, we denote a measurement by z = U^⊤ŷ (with Fourier basis U), which represents the prediction ŷ in the graph Fourier domain, and we assume that z is determined by a stochastic process.
In Fig. 3(b), measurement z is governed by hidden state x, and noise ν captures the data uncertainties in an implicit manner. The choice of state transition equation needs to account for two critical criteria: (1) the model uncertainties need to be considered; and (2) the transition from state x to state xnew needs to represent the evolution of predictions ŷ/ŷnew defined in Eq. (10).
To account for this, we propose a Bayesian extension of GS-IMC, entitled BGS-IMC, which considers the stochastic filtering problem in a dynamic state-space form:
xnew = x + F∆s + η, (11)
znew = xnew + ν, (12)
where Eq. (11) essentially follows Eq. (10) in the graph Fourier domain, i.e., multiplying both sides of Eq. (10) by U^⊤. In control theory, F = U^⊤H(Ł) is called the input matrix and ∆s represents the system input vector. The state equation (11) describes how the true state x, xnew evolves under the impact of the process noise η ∼ N(0, Ση), and the measurement equation (12) characterizes how a measurement znew = U^⊤(s + ∆s) of the true state xnew is corrupted by the measurement noise ν ∼ N(0, Σν). It is worth noting that a larger determinant of Σν means that data points are more dispersed, while for Ση a large determinant implies that BGS-IMC is not sufficiently expressive and it is better to use the measurement for decision making, i.e., BGS-IMC reduces to GS-IMC.
Using Bayes rule, the posterior is given by:
p(xnew|∆s, znew) ∝ p(znew|xnew)p(xnew|∆s), (13)
where p(znew|xnew) and p(xnew|∆s) follow a Gauss-Markov process.
4.2 PREDICTION-CORRECTION UPDATE ALGORITHM
To make an accurate prediction, we propose a prediction-correction update algorithm, resembling workhorse Kalman filtering-based approaches (Kalman, 1960; Wiener et al., 1964). To our knowledge, the class of prediction-correction methods appears less studied in the domain of 1-bit matrix completion, despite its popularity in time-series forecasting (Simonetto et al., 2016; de Bézenac et al., 2020) and computer vision (Matthies et al., 1989; Scharstein & Szeliski, 2002).
In the prediction step, we follow the evolution of the state as defined in Eq. (11) to compute the mean and the covariance of conditional p(xnew|∆s):
E[xnew|∆s] = x̂ + F∆s = x̄new and Var(xnew|∆s) = P + Ση = P̄new, (14)
where x̂ is the estimated state of x and P is the estimated covariance, i.e., P = E[(x − x̂)(x − x̂)^⊤], while x̄new and P̄new are the extrapolated estimate state and covariance, respectively. Meanwhile, it is easy to obtain the mean and the covariance of the conditional p(znew|xnew):
E[znew|xnew] = E[xnew + ν] = xnew and Var(znew|xnew) = E[νν^⊤] = Σν. (15)
In the correction step, we combine Eq. (13) with Eq. (14) and (15):
p(xnew|∆s, znew) ∝ exp( −(xnew − znew)^⊤ Σν^{-1} (xnew − znew)/2 − (xnew − x̄new)^⊤ P̄new^{-1} (xnew − x̄new)/2 ).
By solving ∂ ln p(xnew|∆s, znew)/∂xnew = 0, we have the following corrected estimate state x̂new and covariance Pnew, where we recall that the new measurement is defined as znew = U^⊤(s + ∆s):
x̂new = x̄new + K(znew − x̄new), (16)
Pnew = (I − K)P̄new(I − K)^⊤ + KΣνK^⊤, (17)
K = P̄new(P̄new + Σν)^{-1}, (18)
where K is the Kalman gain and znew − x̄new is called the innovation. It is worth noting that Eq. (16) adjusts the predicted iterate x̄new in terms of the innovation, the key difference to GS-IMC and existing methods, e.g., GAT (Veličković et al., 2017) and SAGE (Hamilton et al., 2017).
Remark. The BGS-IMC approach is highly scalable in Paley-Wiener spaces. Let PWω(G) be the span of k (≪ n) eigenfunctions whose eigenvalues are no greater than ω; then the transition matrix F in (11) is k-by-n and every covariance matrix is of size k × k. Computationally, when P, Ση, Σν are diagonal, it takes O(k²) time to compute x̂new and Pnew, and O(nk) time for x̄new and P̄new. The total time complexity is O(nk + k²), linear in the number of vertices n. Further, Proposition 6 shows that x̂new in (16) is an unbiased and minimum-variance estimator.
Proposition 6. Given an observation ∆s, provided F is known, x̂new obtained in Eq. (16) is the optimal linear estimator in the sense that it is unbiased and minimum-variance.
To summarize, the complete procedure of BGS-IMC is to first specify Ση, Σν and P using prior knowledge, then to calculate the extrapolated state x̄new using (14), and finally to obtain x̂new using (16), so that we have the updated model prediction ŷnew = Ux̂new that ingests the new observation.
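Putting Eqs. (14), (16), (17) and (18) together, one prediction-correction step can be sketched as follows, assuming diagonal covariances stored as 1-D arrays (per the complexity remark above) and the input matrix F realized through the filter response; all variable names are illustrative, not the authors' code.

```python
import numpy as np

def bgs_imc_step(x_hat, P, delta_s, s, U, H_diag, Sigma_eta, Sigma_nu):
    """One BGS-IMC update in the graph Fourier domain with diagonal
    covariances, so the cost stays O(nk + k) per incoming observation."""
    # Prediction (Eq. 14): x_bar = x_hat + F delta_s with F = U^T H(L),
    # realized spectrally as H_diag * (U.T @ delta_s).
    x_bar = x_hat + H_diag * (U.T @ delta_s)
    P_bar = P + Sigma_eta
    # Correction (Eqs. 16-18): fold in the measurement z_new = U^T (s + delta_s).
    z_new = U.T @ (s + delta_s)
    K = P_bar / (P_bar + Sigma_nu)                      # Kalman gain
    x_new = x_bar + K * (z_new - x_bar)                 # corrected state
    P_new = (1 - K) ** 2 * P_bar + K ** 2 * Sigma_nu    # corrected covariance
    return x_new, P_new                                 # prediction: U @ x_new
```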
5 EXPERIMENT
This section evaluates GS-IMC (in Section 3) and BGS-IMC (in Section 4) on real-world datasets. All the experiments are conducted on the machines with Xeon 3175X CPU, 128G memory and P40 GPU with 24 GB memory. The source code and models will be made publicly available.
5.1 EXPERIMENTAL SETUP
We adopt three large real-world datasets widely used for evaluating recommendation algorithms: (1) Koubei (1, 828, 250 ratings of 212, 831 users and 10, 213 items); (2) Tmall (7, 632, 826 ratings of 320, 497 users and 21, 876 items); (3) Netflix (100, 444, 166 ratings of 400, 498 users and 17, 770 items). For each dataset, we follow the experimental protocols in (Liang et al., 2018; Wu et al., 2017a) for inductive top-N ranking, where the users are split into training/validation/test set with ratio 8 : 1 : 1. Then, we use all the data from the training users to optimize the model parameters. In the testing phase, we sort all interactions of the validation/test users in chronological order, holding out the last one interaction for testing and inductively generating necessary representations using the rest data. The results in terms of hit-rate (HR) and normalized discounted cumulative gain (NDCG) are reported on the test set for the model which delivers the best results on the validation set.
We implement our method in Apache Spark with Intel MKL, where matrix computation is parallelized and distributed. In the experiments, we denote the item-user rating matrix by R and define the Laplacian Ł = I − Dv^{-1/2} R De^{-1} R^⊤ Dv^{-1/2}. We set a = 4, γ = 1, ϕ = 10 for GS-IMC, while we set the covariances to Ση = Σν = 10^{-4}I and initialize P using the validation data for BGS-IMC. In the test stage, if a user has |Ω| training interactions, BGS-IMC uses the first |Ω| − 1 interactions to produce the initial state x̂, then feeds the last interaction to simulate the online update.
In the literature, few existing works enable inductive inference for top-N ranking using only the ratings. To make thorough comparisons, we prefer to strengthen IDCF with GCMC for improved performance (IDCF+ for short), rather than report the results of IDCF (Wu et al., 2021) and GCMC (van den Berg et al., 2017) individually. Furthermore, we study their performance with different graph neural networks, including ChebyNet (Defferrard et al., 2016), GAT (Veličković et al., 2017), GraphSage (Hamilton et al., 2017), SGC (Wu et al., 2019) and ARMA (Bianchi et al., 2021). We adopt the Adam optimizer (Kingma & Ba, 2015) with the learning rate decayed by 0.98 every epoch. We search by grid the learning rate and L2 regularizer in {0.1, 0.01, . . . , 0.00001}, the dropout rate over {0.1, 0.2, . . . , 0.7} and the latent factor size over {32, 64, . . . , 512} for the optimal performance. In addition, we also report the results of the shallow models, i.e., MRFCF (Steck, 2019) and SGMC (Chen et al., 2021), which are most closely related to our proposed method. The software provided by the authors is used in the experiments.
We omit the results of the Markov chain Monte Carlo based FISM (He & McAuley, 2016), variational auto-encoder based MultVAE (Liang et al., 2018), scalable Collrank (Wu et al., 2017b), and the graph neural networks GCMC (van den Berg et al., 2017) and NGCF (Wang et al., 2019), as their accuracies were found to be at or below par in SGMC (Chen et al., 2021) and IDCF (Wu et al., 2021).
5.2 ACCURACY COMPARISON
In this section, GS-IMC and BGS-IMC assume that the underlying signal is λ1000-bandlimited, and we compare them with eight state-of-the-art graph-based baselines, including spatial graph models (i.e., IDCF (Wu et al., 2021), IDCF+GAT (Veličković et al., 2017), IDCF+GraphSAGE (Hamilton et al., 2017)), approximate spectral graph models with high-order polynomials (i.e., IDCF+SGC (Wu et al., 2019), IDCF+ChebyNet (Defferrard et al., 2016), IDCF+ARMA (Bianchi et al., 2021)), and exact spectral graph models (i.e., MRFCF (Steck, 2019) and SGMC (Chen et al., 2021)).
In Table 2 and Table 3, the results on the real-world Koubei, Tmall and Netflix datasets show that BGS-IMC outperforms all the baselines on all datasets. Note that MRFCF (Steck, 2019) is the full-rank version of GS-IMC with (one-step) random walk regularization. We can see that MRFCF underperforms its counterpart on all three datasets, which demonstrates the advantage of the bandlimited assumption for inductive top-N ranking tasks. Further, BGS-IMC consistently outperforms GS-IMC on all three datasets by a margin, which proves the efficacy of the prediction-correction algorithm for incremental updates. Additionally, we provide extensive ablation studies in Appendix C, scalability studies in Appendix D and more comparisons with SOTA sequential models in Appendix E.
To summarize, the reason why the proposed method can further improve prediction accuracy is that: 1) GS-IMC exploits the structural information in the 1-bit matrix to mitigate the negative influence of discrete label noise in the graph vertex domain; and 2) BGS-IMC further improves prediction accuracy by considering continuous Gaussian noise in the graph Fourier domain and yielding unbiased and minimum-variance predictions using the prediction-correction update algorithm.
6 CONCLUSION
We have introduced a unified graph signal sampling framework for inductive 1-bit matrix completion, together with theoretical bounds and insights. Specifically, GS-IMC is devised to learn the structural information in the 1-bit matrix to mitigate the negative influence of discrete label noise in the graph vertex domain. Second, BGS-IMC takes into account the model uncertainties in the graph Fourier domain and provides a prediction-correction update algorithm to obtain the unbiased and minimum-variance reconstructions. Both GS-IMC and BGS-IMC have closed-form solutions and are highly scalable. Experiments on the task of inductive top-N ranking have shown their superiority.
A RELATED WORK
Inductive matrix completion. There has been a flurry of research on the problem of inductive matrix completion (Chiang et al., 2018; Jain & Dhillon, 2013; Xu et al., 2013; Zhong et al., 2019), which leverages side information (or content features) in the form of feature vectors to predict inductively on new rows and columns. The intuition behind this family of algorithms is to learn mappings from the feature space to the latent factor space, such that inductive matrix completion methods can adapt to new rows and columns without retraining. However, it has recently been shown (Zhang & Chen, 2020; Ledent et al., 2021; Wu et al., 2021) that inductive matrix completion methods provide limited performance due to the inferior expressiveness of the feature space. On the other hand, prediction accuracy depends strongly on content quality, but in practice high-quality content is becoming hard to collect due to legal risks (Voigt & Von dem Bussche, 2017). By contrast, one advantage of our approach is the capacity for inductive learning without using side information.
Graph neural networks. Inductive representation learning over graph structured data has received significant attention recently due to its ubiquitous applicability. Among the existing works, GraphSAGE (Hamilton et al., 2017) and GAT (Veličković et al., 2017) propose to generate embeddings for previously unseen data by sampling and aggregating features from a node’s local neighbors. In the meantime, various approaches such as ChebyNet (Defferrard et al., 2016) and GCN (Kipf & Welling, 2016) exploit convolutional neural networks to capture sophisticated feature information but are generally less scalable. To address the scalability issue, Wu et al. (2019) develop simplified graph convolutional networks (SGCN) which utilize polynomial filters to simulate the stacked graph convolutional layers. Furthermore, Bianchi et al. (2021) extend auto-regressive moving average (ARMA) filters to convolutional layers for broader frequency responses.
To leverage recent advance in graph neural networks, lightGCN (He et al., 2020), GCMC (van den Berg et al., 2017) and PinSAGE (Ying et al., 2018) represent the matrix by a bipartite graph then generalize the representations to unseen nodes by summing the content-based embeddings over the neighbors. Differently, IGMC (Zhang & Chen, 2020) trains graph neural networks which encode the subgraphs around an edge into latent factors then decode the factors back to the value on the edge. Recently, IDCF (Wu et al., 2021) studies the problem in a downsampled homogeneous graph (i.e., user-user graph in recommender systems) then applies attention networks to yield inductive representations. Probably most closely related to our approach are IDCF (Wu et al., 2021) and IGMC (Zhang & Chen, 2020) which do not assume any side information, such as user profiles and item properties. The key advantage of our approach is not only the closed form solution for efficient GNNs training, but also the theoretical results which guarantee the reconstruction of unseen rows and columns and the practical guidance for potential improvements.
Graph signal sampling. In general, graph signal sampling aims to reconstruct real-valued functions defined on the vertices (i.e., graph signals) from their values on a certain subset of vertices. Existing approaches commonly build upon the assumption of bandlimitedness, by which the signal of interest lies in the span of leading eigenfunctions of the graph Laplacian (Pesenson, 2000; 2008). It is worth noting that we are not the first to consider the connections between graph signal sampling and matrix completion, as recent work by Romero et al. (2016) has proposed a unifying kernel-based framework to broaden both the graph signal sampling and matrix completion perspectives. However, we argue that Romero's work and its successors (Benzi et al., 2016; Mao et al., 2018; McNeil et al., 2021) are orthogonal to our approach, as they mainly focus on real-valued matrix completion in the transductive manner. Specifically, our approach concerns two challenging problems when connecting the ideas and methods of graph signal sampling with inductive one-bit matrix completion: one-bit quantization and online learning.
To satisfy the requirement of online learning, existing works learn the parameters for new rows and columns by performing either stochastic gradient descent, as used in MCEX (Giménez-Febrer et al., 2019), or alternating least squares, as used in eALS (He et al., 2016). The advantage of BGS-IMC is threefold: (i) BGS-IMC has closed-form solutions, bypassing the well-known difficulty of tuning the learning rate; (ii) BGS-IMC considers the random Gaussian noise in the graph Fourier domain, characterizing the uncertainties in the measurement and modeling; and (iii) the prediction-correction algorithm, resembling Kalman filtering, can provide unbiased and minimum-variance reconstructions.
Probably most closely related to our approach are SGMC (Chen et al., 2021) and MRFCF (Steck, 2019) in the sense that both of them formulate their solutions as spectral graph filters and can be regarded as methods for data filtering in domains of discrete signal processing. More specifically, SGMC optimizes latent factors V,U by minimizing the normalized matrix reconstruction error:
min_{U,V} ‖ Dv^{-1/2} R De^{-1/2} − VU ‖, s.t. ‖U‖ ≤ ε, ‖V‖ ≤ η, (19)
while MRFCF minimizes the following matrix reconstruction error:
min_X ‖ R − XR ‖ + λ‖X‖ s.t. diag(X) = 0, (20)
where the diagonal entries of the parameter X are forced to zero. It is now obvious that both SGMC and MRFCF focus on minimizing a matrix reconstruction error. This is one of the key differences to our graph signal sampling framework, which optimizes the functional minimization problem defined in Eq. (5). We argue that our problem formulation is more suitable for the problem of inductive one-bit matrix completion, since it focuses on the reconstruction of bandlimited functions, no matter whether the function is observed during training or at test time. Perhaps more importantly, both methods (Chen et al., 2021; Steck, 2019) can be included as special cases of our framework. We believe that a unified framework across graph signal sampling and inductive matrix completion could benefit both fields, since the modeling knowledge from both domains can be more deeply shared.
Advantages of graph signal sampling perspectives. A graph signal sampling perspective requires modeling the 1-bit matrix data as signals on a graph and formulating the objective in the functional space. Doing so opens the possibility of processing, filtering and analyzing the matrix data with vertex-frequency analysis (Hammond et al., 2011; Shuman et al., 2013), time-variant analysis (Mao et al., 2018; McNeil et al., 2021), smoothing and filtering (Kalman, 1960; Khan & Moura, 2008), etc. In this paper, we technically explore the use of graph spectral filters to inductively recover the missing values of the matrix, a Kalman-filtering based approach to deal with streaming data in the online learning scenario, and vertex-frequency analysis to discover the advantages of the dynamic BERT4REC model over the static BGS-IMC model. We believe that our graph signal sampling framework can serve as a new paradigm for 1-bit matrix completion, especially in large-scale and dynamic systems.
B GENERALIZING SGMC AND MRFCF
This section shows how GS-IMC generalizes SGMC (Chen et al., 2021) and MRFCF (Steck, 2019).
GS-IMC generalizes SGMC. Given the observation R, we follow the standard routine of hypergraphs (Zhou et al., 2007) to calculate the hypergraph Laplacian matrix Ł = I − Dv^{-1/2} R De^{-1} R^⊤ Dv^{-1/2}, where Dv (De) is the diagonal degree matrix of vertices (edges). Then the rank-k approximation (see Eq. (9) in (Chen et al., 2021)) is equivalent to our result using the bandlimited norm R(λ) = 1 if λ ≤ λk and R(λ) = ∞ otherwise:
ŷ = ( ∑_l (1 + R(λl)/ϕ) ul ul^⊤ )^{-1} s = ∑_{l≤k} ul ul^⊤ s = Uk Uk^⊤ s,
where we set ϕ = ∞, so that R(λl)/ϕ → 0 for λl ≤ λk while R(λl)/ϕ = ∞ for λl > λk, and matrix Uk comprises the k leading eigenvectors whose eigenvalues are less than or equal to λk.
GS-IMC generalizes MRFCF. Given R, we simply adopt the correlation relationship to construct the affinity matrix and define the Laplacian as Ł = 2I − Dv^{-1/2} R R^⊤ Dv^{-1/2}. Then the matrix approximation (see Eq. (4) in (Steck, 2019)) is equivalent to our GS-IMC approach using the one-step
random walk norm,
ŷ = ( ∑_l (1 + 1/(a − λl)) ul ul^⊤ )^{-1} s
= ∑_l (1 − 1/(a − λl + 1)) ul ul^⊤ s
= { I − ((a + 1)I − Ł)^{-1} } s
= { I − ((a − 1)I + Dv^{-1/2} R R^⊤ Dv^{-1/2})^{-1} } s,
where we set ϕ = 1 and a ≥ λmax is a pre-specified parameter for the random walk regularization.
C ABLATION STUDIES
This study evaluates how GS-IMC and BGS-IMC perform with different choices of the regularization function and the graph definition. In the following, we assume the underlying signal to recover is in the Paley-Wiener space PWλ1000(G), and hence we only take the first 1000 eigenfunctions whose eigenvalues are not greater than λ1000 to make predictions.
C.1 IMPACT OF REGULARIZATION FUNCTIONS
Table 4 and Table 5 show that, for the proposed GS-IMC models, Tikhonov regularization produces the best HR and NDCG results on both Koubei and Netflix, while diffusion process regularization performs best on Tmall. Meanwhile, BGS-IMC with random walk regularization achieves the best HR and NDCG results on Koubei, while Tikhonov regularization and diffusion process regularization are best on Tmall and Netflix, respectively. Perhaps more importantly, BGS-IMC consistently outperforms GS-IMC on all three datasets by a margin, which proves the efficacy of the prediction-correction algorithm.
We highlight that the reason why BGS-IMC can further improve upon GS-IMC is that BGS-IMC considers Gaussian noise in the Fourier domain, and the prediction-correction update algorithm is capable of providing unbiased and minimum-variance predictions.
C.2 IMPACT OF GRAPH DEFINITIONS
Table 6 presents the HR and NDCG results of GS-IMC with one-step random walk regularization on the Netflix prize data. To avoid clutter, we omit the results of GS-IMC with other regularization functions, since they share the same trends. It seems that the regular graph that uses the covariance matrix as the affinity matrix yields better HR and NDCG results when recommending 10 and 50 items, while the hypergraph helps achieve better results when recommending 100 items.
D SCALABILITY STUDIES
The solution for either GS-IMC or BGS-IMC requires computing the leading eigenvectors whose eigenvalues are less than or equal to a pre-specified ω. However, one might argue that this is computationally intractable on industry-scale datasets. To address this concern, one feasible approach is to apply the Nyström (Fowlkes et al., 2004) method to obtain the leading eigenvectors. For the completeness of the paper, we present the pseudo-code of the approximate eigendecomposition (Chen et al., 2021) in Algorithm 1, whose computational complexity is O(lnk + k³), where n is the number of columns in Ł, l is the number of sampled columns and k is the number of eigenvectors to compute. This reduces the overhead from O(n³) to O(lnk + k³), linear in the number of rows. To evaluate how the proposed GS-IMC and BGS-IMC methods perform with the approximate eigenvectors, we conduct experiments on the largest Netflix prize data. Table 7 reports the HR, NDCG and runtime results for the standard GS-IMC and BGS-IMC methods, and their scalable versions entitled GS-IMCs and BGS-IMCs. To make the comparison complete, we also present the results of the neural IDCF (Wu et al., 2021) model equipped with ChebyNet (Defferrard et al., 2016). It is obvious that the standard GS-IMC and BGS-IMC methods consume only a small fraction of the training time required by graph neural networks. Meanwhile, GS-IMCs achieves comparable ranking performance to GS-IMC while improving the efficiency by 8×.
Algorithm 1 Approximate Eigendecomposition
Require: n × l matrix C derived from l columns sampled without replacement from the n × n kernel matrix Ł; l × l matrix A composed of the intersection of these l columns; l × l matrix W; rank k; the oversampling parameter p and the number of power iterations q.
Ensure: approximate eigenvalues Σ̃ and eigenvectors Ũ.
1: Generate a random Gaussian matrix Ω ∈ R^{l×(k+p)}, then compute the sample matrix A^qΩ.
2: Perform QR-decomposition on A^qΩ to obtain an orthonormal matrix Q that satisfies A^qΩ = QQ^⊤A^qΩ, then solve ZQ^⊤Ω = Q^⊤WΩ.
3: Compute the eigenvalue decomposition of the (k+p)-by-(k+p) matrix Z, i.e., Z = UZ ΣZ UZ^⊤, to obtain UW = QUZ[:, :k] and ΣW = ΣZ[:k, :k].
4: Return Σ̃ ← ΣW, Ũ ← CA^{-1/2} UW ΣW^{-1/2}.
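A NumPy sketch of Algorithm 1; whether W equals A in practice is not specified here, and the symmetrization of Z before the eigendecomposition plus the epsilon guards are added numerical safeguards, so treat them as assumptions.

```python
import numpy as np

def approx_eig(C, A, W, k, p=10, q=2, seed=0):
    """Approximate eigendecomposition (Algorithm 1): Nystrom extension
    combined with a randomized power scheme; A is assumed symmetric PSD."""
    rng = np.random.default_rng(seed)
    l = A.shape[0]
    Omega = rng.standard_normal((l, k + p))              # random Gaussian probe
    Q, _ = np.linalg.qr(np.linalg.matrix_power(A, q) @ Omega)
    X, B = Q.T @ Omega, Q.T @ (W @ Omega)
    Z = np.linalg.lstsq(X.T, B.T, rcond=None)[0].T       # solve Z X = B
    Z = 0.5 * (Z + Z.T)                                  # symmetrize for eigh
    vals, vecs = np.linalg.eigh(Z)
    idx = np.argsort(vals)[::-1][:k]                     # keep the top-k pairs
    Sw, Uw = vals[idx], Q @ vecs[:, idx]
    sa, Ua = np.linalg.eigh(A)                           # build A^{-1/2}
    A_ih = Ua @ ((1.0 / np.sqrt(np.maximum(sa, 1e-12))) * Ua).T
    U_tilde = C @ A_ih @ Uw / np.sqrt(np.maximum(Sw, 1e-12))
    return Sw, U_tilde                                   # Sigma_tilde, U_tilde
```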
Likewise, BGS-IMCs enjoys improved system scalability without significant loss in prediction accuracy. The overall results demonstrate that GS-IMC and BGS-IMC are highly scalable on very large data.
E SPECTRUM ANALYSIS AND DISCUSSION WITH SEQUENTIAL MODELS
We compare BGS-IMC with recent sequential recommendation models, including the Transformer-based SASREC (Kang & McAuley, 2018), the BERT-based BERT4REC (Sun et al., 2019) and the causal-CNN based GREC (Yuan et al., 2020). We choose an embedding size of 256 and search for the optimal hyper-parameters by grid search. Each model is configured using the same parameters provided in the original paper, i.e., two attention blocks with one head for SASREC, three attention blocks with eight heads for BERT4REC and six dilated CNNs with dilation degrees 1, 2, 2, 4, 4, 8 for GREC.
Table 8 presents HR and NDCG results on Koubei for inductive top-N ranking. Note that BGS-IMC only accepts the most recent behavior to update the obsolete state for incremental learning, whereas SASREC, BERT4REC and GREC focus on modeling the dynamic patterns in the sequence; hence, such a comparison is not in favor of BGS-IMC. Interestingly, we see that the static BGS-IMC achieves comparable HR results to SOTA sequential models, while consuming a small fraction of their running time. From this viewpoint, BGS-IMC is more cost-effective than the compared methods.
To fully understand the performance gap in NDCG, we analyze GS-IMC, BGS-IMC and the best baseline BERT4REC in the graph spectral domain, where we limit the ℓ2 norm of each user's spectral signals to one and visualize their averaged values in Figure 4. As expected, the energy of GS-IMC and BGS-IMC is concentrated on the low frequencies, since the high-frequency functions are heavily penalized during minimization. Furthermore, the proposed prediction-correction update algorithm increases the energy of high-frequency functions. This bears a similarity with BERT4REC, whose high-frequency functions are not constrained and can aggressively raise the rankings of unpopular items. This explains why BERT4REC and BGS-IMC have better NDCGs than GS-IMC.
F LIMITATION AND FUTURE WORK
Limitation on sequence modeling. The proposed BGS-IMC method is simple and cannot capture the sophisticated dynamics in the sequence. However, we believe that our work opens the possibility of benefiting sequential recommendation with graph signal processing techniques, for example the extended Kalman filter, KalmanNet and particle filters.

Limitation on sample complexity. The sample complexity is not provided in the paper, and we believe that this is an open problem due to the lack of regularity in the graph, which prevents us from defining the idea of sampling "every other node" (the reader is referred to (Anis et al., 2016; Ortega et al., 2018) for more details).
Future work on deep graph learning. Though GS-IMC and BGS-IMC are mainly compared with neural graph models, we note that our approach can help improve the performance of existing graph neural networks, including GAT (Veličković et al., 2017) and SAGE (Hamilton et al., 2017), etc. We summarize the following directions for future work: 1) It is interesting to see how GS-IMC takes advantage of content features. One feasible idea is to use GS-IMC as multi-scale wavelets which
can be easily adapted to graph neural networks; 2) BGS-IMC can also be utilized to optimize the aggregation module for the improved robustness, as every neighbor’s representation can be viewed as a measurement of the query node’s representation.
G PROOF OF THEOREM 4
Proof. This proof is analogous to Theorem 1.1 in (Pesenson, 2009), where we extend their results from Sobolev norm to a broader class of positive, monotonically increasing functionals.
Proof of the first part of Theorem 4.
Suppose that the Laplacian operator Ł has bounded inverse and the fitting error $\epsilon = 0$. If $y \in PW_\omega(G)$ and $\hat{y}_k$ interpolates $y$ on a set $\Omega = V - \Omega^c$, where $\Omega^c$ admits the Poincaré inequality $\|\phi\| \le \Lambda\|Ł\phi\|$ for any $\phi \in L_2(\Omega^c)$, then $y - \hat{y}_k \in L_2(\Omega^c)$ and we have
$$\|y - \hat{y}_k\| \le \Lambda\|Ł(y - \hat{y}_k)\|.$$
At this point, we can apply Lemma 7 with $\Lambda = a$ and $\phi = y - \hat{y}_k$. It gives the following inequality
$$\|y - \hat{y}_k\| \le \Lambda^k\|Ł^k(y - \hat{y}_k)\|$$
for all $k = 2^l$, $l = 0, 1, 2, \dots$. Since $R(\lambda)$ is a positive and monotonically increasing function, it gives
$$\Lambda^k\|Ł^k(y - \hat{y}_k)\| \le \Lambda^k\|R(Ł)^k(y - \hat{y}_k)\|.$$
Because the interpolant $\hat{y}_k$ minimizes the norm $\|R(Ł)^k\cdot\|$, we have
$$\|R(Ł)^k(y - \hat{y}_k)\| \le \|R(Ł)^k y\| + \|R(Ł)^k\hat{y}_k\| \le 2\|R(Ł)^k y\|.$$
As for functions $y \in PW_\omega(G) \subset PW_{R(\omega)}(G)$ the Bernstein inequality in Lemma 8 holds:
$$\|R(Ł)^k y\| \le R(\omega)^k\|y\|, \quad k \in \mathbb{N}.$$
Putting everything together, we conclude the first part of Theorem 4:
$$\|y - \hat{y}_k\| \le 2\big(\Lambda R(\omega)\big)^k\|y\|, \quad \Lambda R(\omega) < 1, \; k = 2^l, \; l \in \mathbb{N}. \tag{21}$$
Proof of the second part of Theorem 4.
Since $\Lambda R(\omega) < 1$ holds, it gives the following limits
$$\lim_{k\to\infty}(\Lambda R(\omega))^k = 0 \quad\text{and}\quad \lim_{k\to\infty}\|y - \hat{y}_k\| \le 0.$$
With the non-negativity of the norm, we have
$$\|y - \hat{y}_k\| \ge 0. \tag{22}$$
This implies the second part of Theorem 4:
$$y = \lim_{k\to\infty}\hat{y}_k. \tag{23}$$
Lemma 7 (restated from Lemma 4.1 in (Pesenson, 2009)). Suppose that Ł is a bounded self-adjoint positive definite operator in a Hilbert space $L_2(G)$, and $\|\phi\| \le a\|Ł\phi\|$ holds true for any $\phi \in L_2(G)$ and a positive scalar $a > 0$; then for all $k = 2^l$, $l = 0, 1, \dots$, the following inequality holds true:
$$\|\phi\| \le a^k\|Ł^k\phi\|. \tag{24}$$

Lemma 8 (restated from Theorem 2.1 in (Pesenson, 2008)). A function $f \in L_2(G)$ belongs to $PW_\omega(G)$ if and only if the following Bernstein inequality holds true for all $s \in \mathbb{R}_+$:
$$\|Ł^s f\| \le \omega^s\|f\|. \tag{25}$$
G.1 EXTRA DISCUSSION
In (Pesenson, 2008), the complementary set $S = \Omega^c = V - \Omega$ which admits the Poincaré inequality is called a Λ-set. Theorem 4 in our paper and Theorem 1.1 in (Pesenson, 2009) state that bandlimited functions $y \in PW_\omega$ can be reconstructed from their values on a uniqueness set $\Omega = V - S$. To better understand the concept of a Λ-set, we restate Lemma 9 from (Pesenson, 2008), which presents the conditions for a Λ-set. It is worth pointing out that (i) the second condition suggests that the vertices in the Λ-set are likely to be sparsely connected with the uniqueness set Ω; and (ii) the vertices in the Λ-set are disconnected from each other or isolated in the subgraph constructed by the vertices S, since otherwise there always exists a non-zero function $\phi \in L_2(S)$, $\|\phi\| \ne 0$, which makes $\|Ł\phi\| = 0$.

Lemma 9 (restated from Lemma 3.6 in (Pesenson, 2008)). Suppose that for a set of vertices $S \subset V$ (finite or infinite) the following holds true:
1. every point from S is adjacent to a point from the boundary bS, the set of all vertices in V which are not in S but adjacent to a vertex in S;
2. for every v ∈ S there exists at least one adjacent point uv ∈ bS whose adjacency set intersects S only over v;
3. the number $\Lambda = \sup_{v\in S} d(v)$ is finite.

Then the set S is a Λ-set which admits the Poincaré inequality
$$\|\phi\| \le \Lambda\|Ł\phi\|, \quad \phi \in L_2(S). \tag{26}$$
In our experiments on recommender systems, each user's ratings might not comply with the Poincaré inequality. This is because there exist some users who prefer niche products/movies (low-degree nodes). As shown in Fig. 2, user preferences on low-degree nodes are determined by high-frequency functions. When R(ω) is not large enough, the Poincaré inequality does not hold for such users. This also explains why our model performs poorly on cold items.
Regarding the choice of parameter k, empirical results show that using $k \ge 2$ does not help improve the performance. Note also that when k is large enough, all kernels reduce to the bandlimited norm, i.e., $R(\lambda) = 1$ if $\lambda \le \lambda_k \le 1$, since the gap between eigenvalues shrinks.
H PROOF OF THEOREM 5
Proof. Let ξ denote the random label noise which flips a 1 to 0 with rate ρ, and assume that the sample $s = y + \xi$ is observed from y under noise ξ. Then for a graph spectral filter $H_\varphi = (I + R(Ł)/\varphi)^{-1}$ with positive $\varphi > 0$, we have
$$\mathbb{E}\big[\mathrm{MSE}(y, \hat{y})\big] = \frac{1}{n}\mathbb{E}\|y - H_\varphi(y + \xi)\|^2 \le \frac{1}{n}\mathbb{E}\|H_\varphi\xi\|^2 + \frac{1}{n}\|(I - H_\varphi)y\|^2, \tag{27}$$
where the last inequality holds due to the triangle inequality for the matrix norm.
To bound $\mathbb{E}\|H_\varphi\xi\|^2$, let $C_n = R^{1/2}(\omega)\|y\|$; then
$$\mathbb{E}\|H_\varphi\xi\|^2 \overset{(a)}{=} \sum_{y(v)=1}\rho\big(H_{\varphi,(*,v)}\times(-1)\big)^2 + (1-\rho)\big(H_{\varphi,(*,v)}\times 0\big)^2 = \rho\sum_{y(v)=1}\big(H_{\varphi,(*,v)}y(v)\big)^2 = \rho\|H_\varphi y\|^2$$
$$\overset{(b)}{\le} \sup_{\|R^{1/2}(Ł)y\|\le C_n}\rho\|H_\varphi y\|^2 = \sup_{\|z\|\le C_n}\rho\|H_\varphi R^{-1/2}(Ł)z\|^2 = \rho C_n^2\,\sigma^2_{\max}\big(H_\varphi R^{-1/2}(Ł)\big)$$
$$= \rho C_n^2\max_{l=1,\dots,n}\frac{1}{(1+R(\lambda_l)/\varphi)^2}\,\frac{1}{R(\lambda_l)} \le \frac{\rho\varphi^2 C_n^2}{R(\lambda_1)(\varphi+R(\lambda_1))^2}, \tag{28}$$
where (a) follows the definition of the flip random noise ξ and (b) holds due to the fact that y is in the Paley-Wiener space $PW_\omega(G)$. As for the second term,
$$\|(I - H_\varphi)y\|^2 \le \sup_{\|R^{1/2}(Ł)y\|\le C_n}\|(I - H_\varphi)y\|^2 \overset{(a)}{=} \sup_{\|z\|\le C_n}\|(I - H_\varphi)R^{-1/2}(Ł)z\|^2 = C_n^2\,\sigma^2_{\max}\big((I - H_\varphi)R^{-1/2}(Ł)\big)$$
$$= C_n^2\max_{l=1,\dots,n}\Big(1 - \frac{1}{1+R(\lambda_l)/\varphi}\Big)^2\frac{1}{R(\lambda_l)} = \frac{C_n^2}{\varphi}\max_{l=1,\dots,n}\frac{R(\lambda_l)/\varphi}{(R(\lambda_l)/\varphi+1)^2} \overset{(b)}{\le} \frac{C_n^2}{4\varphi}, \tag{29}$$
where (a) holds due to the fact that the eigenvectors of $I - H_\varphi$ are the eigenvectors of $R(Ł)$, and (b) follows the simple upper bound $x/(1+x)^2 \le 1/4$ for $x \ge 0$. By combining everything together, we conclude the result
$$\mathbb{E}\big[\mathrm{MSE}(y, \hat{y})\big] \le \frac{C_n^2}{n}\Big(\frac{\rho\varphi^2}{R(\lambda_1)(\varphi+R(\lambda_1))^2} + \frac{1}{4\varphi}\Big). \tag{30}$$
H.1 EXTRA DISCUSSION
Choosing ϕ to balance the two terms on the right-hand side above gives $\varphi^* = \infty$ for $\rho < 1/8$ and $1 + R(\lambda_1)/\varphi^* = 2\rho^{1/3}$ for $\rho \ge 1/8$. Plugging in this choice, we have the upper bound for $\rho \ge \frac{1}{8}$:
$$\mathbb{E}\big[\mathrm{MSE}(y, \hat{y})\big] \le \frac{C_n^2}{4R(\lambda_1)n}\big(3\rho^{1/3} - 1\big), \tag{31}$$
and if $\rho < \frac{1}{8}$, then the upper bound is
$$\mathbb{E}\big[\mathrm{MSE}(y, \hat{y})\big] \le \frac{C_n^2\rho}{4R(\lambda_1)n}. \tag{32}$$
This result implies that we can use a large ϕ to obtain an accurate reconstruction when the flip rate ρ is no greater than 1/8, while ϕ needs to be carefully tuned when the flip rate ρ is greater than 1/8.
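The balancing rule above translates into a tiny helper, sketched below; the function name is ours and `R_lambda1` denotes $R(\lambda_1)$.

```python
def optimal_phi(rho, R_lambda1):
    """Balance the two error terms of Eq. (30):
    phi* = infinity for rho <= 1/8 (note 2*(1/8)^(1/3) - 1 = 0 at the boundary),
    otherwise 1 + R(lambda_1)/phi* = 2 * rho^(1/3)."""
    if rho <= 1.0 / 8.0:
        return float("inf")  # a large phi suffices for small flip rates
    return R_lambda1 / (2.0 * rho ** (1.0 / 3.0) - 1.0)
```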
I PROOF OF PROPOSITION 6
Below we present the proof in a Bayesian framework; the reader is referred to (Maybeck, 1982) for a geometrical interpretation of the estimate statistics.
Proof of the minimal variance
To minimize the estimate variance, we need to minimize the main diagonal of the covariance $P_\mathrm{new}$:
$$\mathrm{trace}\big(P_\mathrm{new}\big) = \mathrm{trace}\big((I - K)\bar{P}_\mathrm{new}(I - K)^\top + K\Sigma_\nu K^\top\big).$$
Then, we differentiate the trace of $P_\mathrm{new}$ with respect to K:
$$\frac{d\,\mathrm{trace}(P_\mathrm{new})}{dK} = \mathrm{trace}\big(2K\bar{P}_\mathrm{new} - 2\bar{P}_\mathrm{new}\big) + \mathrm{trace}\big(2K\Sigma_\nu\big).$$
The optimal K which minimizes the variance should satisfy $d\,\mathrm{trace}(P_\mathrm{new})/dK = 0$, which gives
$$K\big(\bar{P}_\mathrm{new} + \Sigma_\nu\big) = \bar{P}_\mathrm{new}.$$
This implies that the variance of the estimate $\hat{x}_\mathrm{new}$ is minimized when $K = \bar{P}_\mathrm{new}(\bar{P}_\mathrm{new} + \Sigma_\nu)^{-1}$, consistent with Eq. (18).
Proof of the unbiasedness
Suppose that the obsolete estimate x̂ is unbiased, i.e., $\mathbb{E}\hat{x} = x$; then using Eq. (11) we have
$$\mathbb{E}\big(\bar{x}_\mathrm{new}\big) = \mathbb{E}\big(\hat{x} + F\Delta s\big) = x + F\Delta s = x_\mathrm{new}.$$
Because of Eq. (12) and the fact that the measurement noise ν has zero mean, it gives
$$\mathbb{E}\big(z_\mathrm{new}\big) = \mathbb{E}\big(x_\mathrm{new} + \nu\big) = x_\mathrm{new}.$$
Putting everything together, we conclude the following result:
$$\mathbb{E}\big(\hat{x}_\mathrm{new}\big) = \mathbb{E}\big(\bar{x}_\mathrm{new} + K(z_\mathrm{new} - \bar{x}_\mathrm{new})\big) = x_\mathrm{new} + K(x_\mathrm{new} - x_\mathrm{new}) = x_\mathrm{new}. \tag{33}$$
This implies that the estimate state $\hat{x}_\mathrm{new}$ is unbiased.
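For illustration, a quick Monte Carlo sanity check of the unbiasedness claim is sketched below; the toy dimensions, covariances and random seed are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 5
x_new = rng.normal(size=k)                     # true (post-transition) state
P_bar = 0.5 * np.eye(k)                        # extrapolated covariance
Sigma_nu = 0.1 * np.eye(k)                     # measurement noise covariance
K = P_bar @ np.linalg.inv(P_bar + Sigma_nu)    # optimal gain from the proof

estimates = []
for _ in range(20000):
    x_bar = x_new + rng.multivariate_normal(np.zeros(k), P_bar)      # unbiased prediction
    z_new = x_new + rng.multivariate_normal(np.zeros(k), Sigma_nu)   # noisy measurement
    estimates.append(x_bar + K @ (z_new - x_bar))                    # corrected estimate

print(np.abs(np.mean(estimates, axis=0) - x_new).max())  # close to 0, as Eq. (33) predicts
```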
J IMPLEMENTATION DETAILS
In this section, we present the details of our implementation in Section 5, including the additional dataset details, evaluation protocols and model architectures, for reproducibility. All the experiments are conducted on machines with a Xeon 3175X CPU, 128G memory and a P40 GPU with 24 GB memory. The configurations of our environments and packages are listed below:

• Ubuntu 16.04
• CUDA 10.2
• Python 3.7
• Tensorflow 1.15.3
• Pytorch 1.10
• DGL 0.7.1
• NumPy 1.19.0 with MKL Intel
J.1 ADDITIONAL DATASET DETAILS
We use three real-world datasets which are processed in line with (Liang et al., 2018; Steck, 2019): (1) for Koubei2, we keep users with at least 5 records and items that have been purchased by at least 100 users; and (2) for Tmall3, we keep users who click at least 10 items and items which have been seen by at least 200 users; and (3) for Netflix4, we keep all of the users and items. In addition, we chose the random seed as 9876 when splitting the users into training/validation/test sets.
2https://tianchi.aliyun.com/dataset/dataDetail?dataId=53 3https://tianchi.aliyun.com/dataset/dataDetail?dataId=35680 4https://kaggle.com/netflix-inc/netflix-prize-data
J.2 EVALUATION PROTOCOLS
In Figure 5, we illustrate the difference between the transductive and inductive ranking evaluation protocols. In the transductive ranking problem, model performance is evaluated on the users already known during model training, whereas in the inductive ranking problem it is evaluated on unseen users. It is worth noting that in the testing phase, we sort all interactions of the validation/test users in chronological order, holding out the last interaction for testing and inductively generating the necessary representations on the rest of the data. In a nutshell, we evaluate our approach and the baselines on the challenging inductive next-item prediction problem.
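A minimal pandas sketch of this chronological leave-last-one-out split is shown below; the column names `user` and `timestamp` are assumptions for illustration.

```python
import pandas as pd

def leave_last_one_out(df: pd.DataFrame):
    """Hold out each user's most recent interaction for testing."""
    df = df.sort_values(["user", "timestamp"])
    test = df.groupby("user").tail(1)       # last interaction per user
    history = df.drop(test.index)           # rest is used to build representations
    return history, test
```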
J.3 EVALUATION METRICS
We adopt hit-rate (HR) and normalized discounted cumulative gain (NDCG) to evaluate the model performance. Suppose that the model provides N recommended items for user u as $R_u$, and let $T_u$ denote the interacted items of the user; then HR is computed as follows:
$$\mathrm{HR}@N = \mathbb{E}_u\,\mathbb{1}_{|T_u\cap R_u|}, \tag{34}$$
where $\mathbb{1}_{|\Omega|}$ is equal to 1 if the set Ω is not empty and is equal to 0 otherwise. NDCG evaluates the ranking performance by taking the positions of correct items into consideration:
$$\mathrm{NDCG}@N = \frac{1}{Z}\,\mathrm{DCG}@N = \frac{1}{Z}\sum_{j=1}^{N}\frac{2^{\mathbb{1}_{|R_u^j\cap T_u|}} - 1}{\log_2(j+1)}, \tag{35}$$
where Z is the normalization constant that represents the maximum value of DCG@N for $T_u$.
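The two metrics can be computed per user as in the sketch below; both functions take the ranked top-N list and the held-out item set, and the function names are our own.

```python
import numpy as np

def hr_at_n(ranked_items, relevant):
    """Eq. (34): 1 if the top-N list hits any held-out item, else 0."""
    return float(len(set(ranked_items) & set(relevant)) > 0)

def ndcg_at_n(ranked_items, relevant):
    """Eq. (35): discounted gain of hits, normalized by the ideal DCG."""
    # position j is 0-indexed here, hence log2(j + 2) = log2((j+1) + 1)
    dcg = sum(1.0 / np.log2(j + 2) for j, item in enumerate(ranked_items) if item in relevant)
    ideal = sum(1.0 / np.log2(j + 2) for j in range(min(len(relevant), len(ranked_items))))
    return dcg / ideal if ideal > 0 else 0.0
```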
J.4 GRAPH LAPLACIAN
Let R denote the item-user rating matrix, and let $D_v$ and $D_e$ denote the diagonal degree matrices of vertices and edges respectively; then the graph Laplacian matrix used in our experiments is defined as follows:
$$Ł = I - D_v^{-1/2}RD_e^{-1}R^\top D_v^{-1/2}, \tag{36}$$
where I is the identity matrix.
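A dense NumPy construction of Eq. (36) is sketched below for small matrices; a production implementation would use sparse operations, and the degree clamping is our own safeguard.

```python
import numpy as np

def hypergraph_laplacian(R):
    """Eq. (36): L = I - Dv^{-1/2} R De^{-1} R^T Dv^{-1/2} for a binary item-user R."""
    dv = R.sum(axis=1)   # vertex (item) degrees
    de = R.sum(axis=0)   # edge (user) degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(de, 1e-12))
    return np.eye(R.shape[0]) - Dv_inv_sqrt @ R @ De_inv @ R.T @ Dv_inv_sqrt
```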
J.5 DISCUSSION ON PREDICTION FUNCTIONS
In the experiments, we focus on making personalized recommendations to the users, so we are interested in the ranks of the items for each user. Specifically, for the top-k ranking problem we choose the items with the k largest predicted ratings:
$$\mathrm{Recommendation}@k = \max_{|O|=k}\;\sum_{v\in O,\,v\notin\Omega^+} y(v). \tag{37}$$
More importantly, our proposed method is also suitable for the link prediction problem, where the goal is to classify whether an edge between two vertices exists or not. This can be done by choosing a splitting point to partition the candidate edges into two parts. There are many different ways of choosing such a splitting point; for example, one can select the optimal splitting point based on the ROC or AUC results on the validation set.
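Eq. (37) reduces to selecting the k largest predicted scores outside the observed set, e.g. (a sketch; the function name is ours):

```python
import numpy as np

def recommend_top_k(y_hat, observed, k):
    """Return indices of the k highest-scoring items not already in Omega+."""
    scores = y_hat.astype(float).copy()
    scores[list(observed)] = -np.inf      # exclude training interactions
    return np.argsort(scores)[::-1][:k]
```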
J.6 MODEL ARCHITECTURES
As mentioned before, we equip IDCF (Wu et al., 2021) with different GNN architectures as the backbone. Here we introduce the details for them.
GAT. We use the GATConv layer available in DGL for implementation. The detailed architecture description is as below, followed by a code sketch:

• A sequence of one-layer GATConv with four heads.
• Add self-loops and use batch normalization for the graph convolution in each layer.
• Use tanh as the activation.
• Use the inner product between the user embedding and the item embedding as the ranking score.
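Below is a sketch of how such a backbone could be assembled in DGL/PyTorch; the class name, feature sizes, and module composition are our own assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import dgl
from dgl.nn import GATConv

class GATBackbone(nn.Module):
    """One-layer GAT with four heads, batch norm, and tanh, as listed above."""
    def __init__(self, in_feats, out_feats, num_heads=4):
        super().__init__()
        self.conv = GATConv(in_feats, out_feats, num_heads)
        self.bn = nn.BatchNorm1d(out_feats * num_heads)

    def forward(self, g, x):
        g = dgl.add_self_loop(g)            # add self-loops as described above
        h = self.conv(g, x).flatten(1)      # concatenate the four heads
        return torch.tanh(self.bn(h))       # tanh activation

def ranking_score(user_emb, item_emb):
    """Inner product between user and item embeddings as the ranking score."""
    return (user_emb * item_emb).sum(dim=-1)
```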
GraphSAGE. We use the SAGEConv layer available in DGL for implementation. The detailed architecture description is as below:

• A sequence of two-layer SAGEConv.
• Add self-loops and use batch normalization for the graph convolution in each layer.
• Use ReLU as the activation.
• Use the inner product between the user embedding and the item embedding as the ranking score.
SGC. We use the SGConv layer available in DGL for implementation. The detailed architecture description is as below:

• One-layer SGConv with two hops.
• Add self-loops and use batch normalization for the graph convolution in each layer.
• Use ReLU as the activation.
• Use the inner product between the user embedding and the item embedding as the ranking score.
ChebyNet. We use the ChebConv layer available in DGL for implementation. The detailed architecture description is as below:

• One-layer ChebConv with two hops.
• Add self-loops and use batch normalization for the graph convolution in each layer.
• Use ReLU as the activation.
• Use the inner product between the user embedding and the item embedding as the ranking score.
ARMA. We use the ARMAConv layer available in DGL for implementation. The detailed architecture description is as below:

• One-layer ARMAConv with two hops.
• Add self-loops and use batch normalization for the graph convolution in each layer.
• Use tanh as the activation.
• Use the inner product between the user embedding and the item embedding as the ranking score.
We also summarize the implementation details of the compared sequential baselines as follows.
SASREC.5 We use the software provided by the authors for the experiments. The detailed architecture description is as below:

• A sequence of two-block Transformer with one head.
• Set the maximum sequence length to 30.
• Use the inner product between the user embedding and the item embedding as the ranking score.
BERT4REC.6 We use the software provided by the authors for the experiments. The detailed architecture description is as below:

• A sequence of three-block Transformer with eight heads.
• Set the maximum sequence length to 30 with a masking probability of 0.2.
• Use the inner product between the user embedding and the item embedding as the ranking score.
5https://github.com/kang205/SASRec 6https://github.com/FeiSun/BERT4Rec
GREC.7 We use the software provided by the authors for the experiments. The detailed architecture description is as below:

• A sequence of six-layer dilated CNN with degrees 1, 2, 2, 4, 4, 8.
• Set the maximum sequence length to 30 with a masking probability of 0.2.
• Use the inner product between the user embedding and the item embedding as the ranking score.
7https://github.com/fajieyuan/WWW2020-grec
Summary Of The Paper
The authors propose a graph signal sampling approach to matrix completion/recommendation systems. They propose regularization approaches for noise reduction, and also provide a Bayesian extension that takes model uncertainty into account. They show that their approaches are scalable, and provide both theoretical guarantees and experimental evaluations.
Strengths And Weaknesses
Strengths:
The study cites the related prior art in a comprehensive manner.
The core idea of the graph signal sampling approach plus regularization is very natural, and admits simple closed-form solutions.
A reasonable error analysis is given for the method.
Extensive experiments, with good results obtained for the proposed method against competitors.
Weaknesses:
The paper jumps directly into describing the problem of inductive 1-bit matrix completion. The motivation is insufficient in my opinion, given that this is a very specific problem. I suggest that the authors spend a paragraph at the beginning explaining WHY the 1-bit formulation is helpful and why inductive matrix completion is useful, so a broader audience can be reached.
The "Bayesian" formulation is not described clearly and sufficiently. The authors jump straight to describing a stochastic filtering problem and their prediction-correction algorithm. There is minimal description of the notation, and there is no discussion of the model choice, why this model is sensible, or the choice of parameters such as Sigma_nu and Sigma_eta. For those who are not familiar with this model/literature, it can be very confusing what the prior is and why the proposed filtering algorithm works.
Without more significant exposition of the Bayesian model, it currently reads more like a distraction from the main theme of the paper.
I think the paper's readability and clarity would be greatly improved if the authors addressed the two issues above.
Minor Typos:
In the first paragraph of the first page, for "a subset of positive examples Psi randomly sampled from {(i, j) | ... }", I think that inside the set, in addition to j \leq m, there should also be i \leq n.
Clarity, Quality, Novelty And Reproducibility
The paper is written moderately clearly. To a reader with less experience in the related areas, this paper could be very difficult to follow, given its very condensed style of presentation. I have given suggestions to the authors to improve their exposition.
The idea is original in the sense that, while nothing ground-breaking is proposed, the authors have managed to combine many different existing ideas from different sub-fields to come up with something reasonably new.
Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution

ABSTRACT
1 INTRODUCTION
In domains such as recommender systems and social networks, only "likes" (i.e., ones) are observed in the system, and service providers (e.g., Netflix) are interested in discovering potential "likes" for existing users to stimulate demand. This motivates the problem of 1-bit matrix completion (OBMC), of which the goal is to recover missing values in an n-by-m item-user matrix $R \in \{0,1\}^{n\times m}$. We note that $R_{i,j} = 1$ means that item i is rated by user j, but $R_{i,j} = 0$ is essentially unlabeled or unknown, being a mixture of unobserved positive examples and true negative examples.
However, in the real world new users, who are not exposed to the model during training, may appear at the testing stage. This fact stimulates the development of inductive 1-bit matrix completion, which aims to recover an unseen vector $y \in \{0,1\}^n$ from its partial positive entries $\Omega^+ \subseteq \{j \mid y_j = 1\}$ at test time. Fig. 1(a) emphasizes the difference between conventional and inductive approaches. More formally, let $M \in \{0,1\}^{n\times(m+1)}$ denote the underlying matrix, where only a subset of positive examples Ψ is randomly sampled from $\{(i,j) \mid M_{i,j} = 1, i \le n, j \le m\}$ such that $R_{i,j} = 1$ for $(i,j) \in \Psi$ and $R_{i,j} = 0$ otherwise. Considering the (m+1)-th column y of matrix M, we likewise denote its observations $s_i = 1$ for $i \in \Omega^+$ and $s_i = 0$ otherwise. We note that the sampling process here assumes that there exists a random label noise ξ which flips a 1 to 0 with probability ρ, or equivalently $s = y + \xi$ where
$$\xi_i = -1 \;\text{ for }\; i \in \{j \mid y_j = 1\} - \Omega^+, \;\text{ and }\; \xi_i = 0 \;\text{ otherwise}. \tag{1}$$
Fig. 1(a) presents an example of s, y, ξ to better understand their relationships.
Fundamentally, the reconstruction of true y from corrupted s bears a resemblance with graph signal sampling. Fig. 1(b) shows that the item-user rating matrix R can be used to define a homogeneous
∗Junchi Yan is the correspondence author who is also with Shanghai AI Laboratory. The work was in part supported by NSFC (62222607), STCSM (22511105100).
item-item graph (see Sec 3.1), such that user ratings y/s on items can be regarded as signals residing on graph nodes. The reconstruction of bandlimited graph signals from certain subsets of vertices (see Sec 2) has been extensively studied in graph signal sampling (Pesenson, 2000; 2008).
Despite popularity in areas such as image processing (Shuman et al., 2013; Pang & Cheung, 2017; Cheung et al., 2018) and matrix completion (Romero et al., 2016; Mao et al., 2018; McNeil et al., 2021), graph signal sampling appears less studied for the specific inductive one-bit matrix completion problem focused on in this paper (see Appendix A for detailed related works). Probably most closely related to our approach are MRFCF (Steck, 2019) and SGMC (Chen et al., 2021), which formulate their solutions as spectral graph filters. However, we argue that these methods are orthogonal to ours since they focus on optimizing the rank minimization problem, whereas we optimize the functional minimization problem, thereby making it more convenient and straightforward to process and analyze the matrix data with vertex-frequency analysis (Hammond et al., 2011; Shuman et al., 2013), time-variant analysis (Mao et al., 2018; McNeil et al., 2021), and smoothing and filtering (Kalman, 1960; Khan & Moura, 2008). Furthermore, (Steck, 2019; Chen et al., 2021) can be incorporated as special cases of our unified graph signal sampling framework (see Appendix B for detailed discussions).
Another emerging line of research has focused on learning the mapping from side information (or content features) to latent factors (Jain & Dhillon, 2013; Xu et al., 2013; Ying et al., 2018; Zhong et al., 2019). However, it has recently been shown (Zhang & Chen, 2020; Ledent et al., 2021; Wu et al., 2021) that in general this family of algorithms suffers from inferior expressiveness when high-quality content is not available. Further, collecting personal data is likely to be unlawful as well as a breach of the data minimization principle in GDPR (Voigt & Von dem Bussche, 2017).
Much effort has also been made to leverage advanced graph neural networks (GNN) for improvements. van den Berg et al. (2017) represent the data matrix R by a bipartite graph, then generalize the representations to unseen nodes by summing the embeddings over the neighbors. Zhang & Chen (2020) develop graph neural networks which encode the subgraphs around an edge into latent factors, then decode the factors back to the value on the edge. Besides, Wu et al. (2021) consider the problem in a downsampled homogeneous graph (i.e., the user-user graph in recommender systems), then exploit attention networks to yield inductive representations. The key advantage of our approach is not only the closed-form solution, which takes a small fraction of the training time required for GNNs, but also the theoretical results that guarantee accurate reconstruction and provide guidance for practical applications.
We emphasize the challenges when connecting ideas and methods of graph signal sampling with inductive 1-bit matrix completion — 1-bit quantization and online learning. Specifically, 1-bit quantization raises challenges for formulating the underlying optimization problems: minimizing the squared loss on the observed positive examples Ω+ yields a degenerate solution — the vector with all entries equal to one achieves zero loss; minimizing the squared loss on the corrupted data s introduces a systematic error due to the random label noise ξ in Eq. (1). To address the issue, we represent the observed data R as a homogeneous graph, then devise a broader class of regularization functionals on graphs to mitigate the impact of the discrete random noise ξ. Existing theory for total variation denoising (Sadhanala et al., 2016; 2017) and graph regularization (Belkin et al., 2004; Huang et al., 2011), which takes into account continuous Gaussian noise, does not sufficiently address recoverability in inductive 1-bit matrix completion (see Sec 3.4). We finally manage to derive a closed-form solution, entitled Graph Sampling for Inductive (1-bit) Matrix Completion (GS-IMC), which biases the reconstruction towards functions that vary little between adjacent vertices for noise reduction.
For online learning, existing matrix factorization methods (Devooght et al., 2015; Volkovs & Yu, 2015; He et al., 2016) incrementally update model parameters via gradient descent, requiring an expensive line search to set the best learning rate. To scale up to large data, we develop a Bayesian extension called BGS-IMC where a prediction-correction algorithm is devised to instantly refresh the prediction given new incoming data. The prediction step tracks the evolution of the optimization problem such that the predicted iterate does not drift away from the optimum, while the correction step adjusts for the distance between the current prediction and the new information at each step. The advantage over baselines is that BGS-IMC considers the uncertainties in the graph Fourier domain, and the prediction-correction algorithm can efficiently provide unbiased and minimum-variance predictions in closed form, without using gradient descent techniques. The contributions are:
• New Inductive 1-bit Matrix Completion Framework. We propose and technically manage (for the first time to our best knowledge) to introduce graph signal sampling to inductive 1-bit matrix completion. It opens the possibility of benefiting the analysis and processing of the matrix with the signal processing toolbox, including vertex-frequency analysis (Hammond et al., 2011; Shuman et al., 2013), time-variant analysis (Mao et al., 2018; McNeil et al., 2021), smoothing and filtering (Kalman, 1960; Khan & Moura, 2008), etc. We believe that our unified framework can serve as a new paradigm for 1-bit matrix completion, especially in large-scale and dynamic systems.
• Generalized Closed-form Solution. We derive a novel closed-form solution (i.e., GS-IMC) in the graph signal sampling framework, which incorporates existing closed-form solutions as special cases, e.g., (Chen et al., 2021; Steck, 2019). GS-IMC is learned from only positive data with discrete random noise. This is one of the key differences to typical denoising methods (Sadhanala et al., 2016) where efforts are spent on removing continuous Gaussian noise from a real-valued signal.
• Robustness Enhancement. We consider the online learning scenario and construct a Bayesian extension, i.e., BGS-IMC, where a new prediction-correction algorithm is proposed to instantly yield unbiased and minimum-variance predictions given new incoming data. Experiments in Appendix E show that BGS-IMC is more cost-effective than many neural models such as SASREC (Kang & McAuley, 2018), BERT4REC (Sun et al., 2019) and GREC (Yuan et al., 2020). We believe that this proves a potential for the future application of graph signal sampling to sequential recommendation.
• Theoretical Guarantee and Empirical Effectiveness. We extend the Paley-Wiener theorem of (Pesenson, 2009) on real-valued data to positive-unlabelled data with statistical noise. The theory shows that under mild conditions, unseen rows and columns in training can be recovered from a certain subset of their values that is present at test time. Empirical results on real-world data show that our methods achieve state-of-the-art performance for the challenging inductive Top-N ranking tasks.
2 PRELIMINARIES
In this section, we introduce the notions and provide the necessary background of graph sampling theory. Let G = (V,E,w) denote a weighted, undirected and connected graph, where V is a set of vertices with |V | = n, E is a set of edges formed by the pairs of vertices and the positive weight w(u, v) on each edge is a function of the similarity between vertices u and v.
Space $L_2(G)$ is the Hilbert space of all real-valued functions $f: V \to \mathbb{R}$ with the following norm:
$$\|f\| = \sqrt{\sum_{v\in V}|f(v)|^2}, \tag{2}$$
and the discrete Laplace operator Ł is defined by the formula (Chung & Graham, 1997):
$$Łf(v) = \frac{1}{\sqrt{d(v)}}\sum_{u\in\mathcal{N}(v)} w(u,v)\Big(\frac{f(v)}{\sqrt{d(v)}} - \frac{f(u)}{\sqrt{d(u)}}\Big), \quad f \in L_2(G),$$
where $\mathcal{N}(v)$ signifies the neighborhood of node v and $d(v) = \sum_{u\in\mathcal{N}(v)} w(u,v)$ is the degree of v.
Definition 1 (Graph Fourier Transform). Given a function or signal f in $L_2(G)$, the graph Fourier transform and its inverse (Shuman et al., 2013) can be defined as follows:
$$\tilde{f}_G = U^\top f \quad\text{and}\quad f = U\tilde{f}_G, \tag{3}$$
where U represents the eigenfunctions of the discrete Laplace operator Ł, $\tilde{f}_G$ denotes the signal in the graph Fourier domain and $\tilde{f}_G(\lambda_l) = \langle f, u_l\rangle$ signifies the information at the frequency $\lambda_l$.¹

Definition 2 (Bandlimitedness). $f \in L_2(G)$ is called an ω-bandlimited function if its Fourier transform $\tilde{f}_G$ has support in [0, ω], and ω-bandlimited functions form the Paley-Wiener space $PW_\omega(G)$.

Definition 3 (Graph Signal Sampling). Given $y \in PW_\omega(G)$, y can be recovered from its values on the vertices Ω+ by minimizing the objective below (Pesenson, 2000; 2008), with positive scalar k:
$$\min_{f\in L_2(G)}\|Ł^k f\| \quad\text{s.t.}\quad f(v) = y(v), \;\forall v \in \Omega^+. \tag{4}$$

Recall that the observation in inductive 1-bit matrix completion consists of only ones and no zeros (i.e., y(v) = 1 for v ∈ Ω+) and $\|Ł^k\mathbf{1}\| = 0$. It is obvious that minimizing the loss on the observed entries corresponding to ones produces a degenerate solution — the vector with all entries equal to one achieves zero loss. From this point of view, existing theory for sampling real-valued signals (Pesenson, 2000; 2008) is not well suited to the inductive 1-bit matrix completion problem.
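Definition 1 amounts to projecting a signal onto the eigenbasis of the Laplacian; a minimal, self-contained NumPy sketch is below, using a toy combinatorial Laplacian of a 4-node path graph for illustration.

```python
import numpy as np

# toy 4-node path graph and its (combinatorial) Laplacian
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A
f = np.array([1.0, 0.0, 0.0, 1.0])       # a signal on the vertices

lam, U = np.linalg.eigh(L)               # eigenvalues carry a notion of frequency
f_tilde = U.T @ f                        # graph Fourier transform, Eq. (3)
assert np.allclose(U @ f_tilde, f)       # inverse transform recovers f
```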
3 CLOSED-FORM SOLUTION FOR 1-BIT MATRIX COMPLETION
This section builds a unified graph signal sampling framework for inductive 1-bit matrix completion that can inductively recover y from the positive ones on the set Ω+. The rationale behind our framework is that rows that have similar observations are likely to have similar reconstructions. This makes a lot of sense in practice: for example, a user (column) is likely to give similar items (rows) similar scores in recommender systems. To achieve this, we need to construct a homogeneous graph G where connected vertices represent rows which have similar observations, so that we can design a class of graph regularized functionals that encourage adjacent vertices on graph G to have similar reconstructed values. In particular, we manage to provide a closed-form solution to the matrix completion problem (entitled GS-IMC), together with theoretical bounds and insights.
3.1 GRAPH DEFINITION
We begin with the introduction of two different kinds of methods to construct homogeneous graphs using the zero-one matrix $R \in \mathbb{R}^{n\times m}$: (i) following the definition of hypergraphs (Zhou et al., 2007), matrix R can be regarded as the incidence matrix, so as to formulate the hypergraph Laplacian matrix as $Ł = I - D_v^{-1/2}RD_e^{-1}R^\top D_v^{-1/2}$, where $D_v \in \mathbb{R}^{n\times n}$ ($D_e \in \mathbb{R}^{m\times m}$) is the diagonal degree matrix of vertices (edges); and (ii) for regular graphs, one of the most popular approaches is to utilize the covariance between rows to form the adjacency matrix $A_{i,j} = \mathrm{Cov}(R_i, R_j)$ for $i \ne j$, so that we can define the graph Laplacian matrix as $Ł = I - D_v^{-1/2}AD_v^{-1/2}$.
3.2 GRAPH SIGNAL SAMPLING FRAMEWORK
Given a graph G = (V, E), any real-valued column $y \in \mathbb{R}^n$ can be viewed as a function on G that maps from V to $\mathbb{R}$, and specifically the i-th vector component $y_i$ is equivalent to the function value y(i) at the i-th vertex. Now it is obvious that the problem of inductive matrix completion, of which the goal is to recover column y from its values on the entries Ω+, bears a resemblance to the problem of graph signal sampling, which aims to recover a function y from its values on the vertices Ω+.
However, most existing graph signal sampling methods (Romero et al., 2016; Mao et al., 2018; McNeil et al., 2021) yield degenerate solutions when applied to the 1-bit matrix completion problem. A popular heuristic is to treat some or all of the zeros as negative examples Ω−, then to recover y by optimizing the following functional minimization problem, given any $k = 2^l$, $l \in \mathbb{N}$:
$$\min_{f\in L_2(G)}\big\|[R(Ł)]^k f\big\| \quad\text{s.t.}\quad \|s_\Omega - f_\Omega\| \le \epsilon, \tag{5}$$

¹To be consistent with (Shuman et al., 2013), $u_l$ (the l-th column of matrix U) is the l-th eigenvector associated with the eigenvalue $\lambda_l$, and the graph Laplacian eigenvalues carry a notion of frequency.

where recall that $s = y + \xi$ is the observed data corrupted by the discrete random noise ξ, and $s_\Omega$ ($f_\Omega$) signifies the values of s (f) only on $\Omega = \Omega^+ \cup \Omega^-$; $R(Ł) = \sum_l R(\lambda_l)u_lu_l^\top$ denotes the regularized Laplace operator, in which $\{\lambda_l\}$ and $\{u_l\}$ are respectively the eigenvalues and eigenfunctions of the operator Ł. It is worth noting that $s(i) = y(i) + \xi(i) = 0$ for $i \in \Omega^-$ is not true negative data, and hence Ω− will introduce a systematic bias whenever there exists $i \in \Omega^-$ such that $y(i) = 1$.

The choice of the regularization function R(λ) needs to account for two critical criteria: 1) the resulting regularization operator R(Ł) needs to be semi-positive definite; 2) as mentioned before, we expect the reconstruction ŷ to have similar values on adjacent nodes, so that uneven functions should be penalized more than even functions. To account for this, we adopt the family of positive, monotonically increasing functions (Smola & Kondor, 2003) as presented in Table 1.
To this end, we summarize two natural questions concerning our framework: 1) What are the benefits of introducing the regularized Laplacian penalty? It is obvious that minimizing the discrepancy between $s_\Omega$ and $f_\Omega$ does not provide the generalization ability to recover unknown values on the remaining vertices V − Ω; Theorems 4 and 5 answer this question by examining the error bounds. 2) What kind of R(Ł) constitutes a reasonable choice? It has been studied in (Huang et al., 2011) that R(Ł) is most appropriate if it is unbiased, and an unbiased R(Ł) reduces variance without incurring any bias on the estimator. We also highlight the empirical study in Appendix C that evaluates how the performance is affected by the definition of graph G and the regularization function R(λ).
3.3 CLOSED-FORM SOLUTION
In what follows, we aim to provide a closed-form solution for our unified framework by treating all of the zeros as negative examples, i.e., s(v) = 1 for v ∈ Ω+ and s(v) = 0 otherwise. Then, using the method of Lagrange multipliers, we reformulate Eq. (5) as the following problem:
$$\min_{f\in L_2(G)}\frac{1}{2}\langle f, R(Ł)f\rangle + \frac{\varphi}{2}\|s - f\|^2, \tag{6}$$
where ϕ > 0 is a hyperparameter. Obviously, this problem has a closed-form solution:
$$\hat{y} = \big(I + R(Ł)/\varphi\big)^{-1}s = \Big(\sum_l\big(1 + R(\lambda_l)/\varphi\big)u_lu_l^\top\Big)^{-1}s = H(Ł)s, \tag{7}$$
where $H(Ł) = \sum_l H(\lambda_l)u_lu_l^\top$ with kernel $1/H(\lambda_l) = 1 + R(\lambda_l)/\varphi$, and we exemplify H(λ) for ϕ = 1 in Table 1. From the viewpoint of spectral graph theory, our GS-IMC approach is essentially a spectral graph filter that amplifies (attenuates) the contributions of low (high)-frequency functions.
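Computationally, Eq. (7) is a spectral filter applied in the eigenbasis; a sketch is shown below, where `R_fn` stands for the chosen regularization function from Table 1 and the function name is our own.

```python
import numpy as np

def gs_imc_predict(lam, U, s, R_fn, phi=10.0):
    """Closed-form GS-IMC, Eq. (7): y_hat = U diag(1/(1 + R(lam)/phi)) U^T s.

    lam: eigenvalues of the Laplacian; U: corresponding eigenvectors;
    s: observed binary column; R_fn: regularization function R(lambda).
    """
    h = 1.0 / (1.0 + R_fn(lam) / phi)   # spectral response H(lambda_l)
    return U @ (h * (U.T @ s))

# example with a Tikhonov-style regularization R(lambda) = lambda:
# y_hat = gs_imc_predict(lam, U, s, R_fn=lambda lam: lam)
```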
Remark. To understand low-frequency and high-frequency functions, Figure 2 presents case studies in the context of recommender systems on the Netflix prize data (Bennett et al., 2007). Specifically, we divide the vertices (items) into four classes: very-high-degree (> 5000), high-degree (> 2000), medium-degree (> 100) and low-degree vertices. Then, we report the recall results of all four classes in different Paley-Wiener spaces $PW_{\lambda_{50}}(G), \dots, PW_{\lambda_{1000}}(G)$ for top-100 ranking prediction. The interesting observation is: (1) the low-frequency functions with eigenvalues less than $\lambda_{100}$ contribute nothing to low-degree vertices; and (2) the high-frequency functions whose eigenvalues are greater than $\lambda_{500}$ do not help to increase the performance on very-high-degree vertices. This finding implies that low (high)-frequency functions reflect the user preferences on popular (cold) items. From this viewpoint, the model defined in Eq. (7) aims to exploit the items with high click-through rate with high certainty, which makes sense in commercial applications.
3.4 ERROR ANALYSIS
Our GS-IMC approach defined in Eq. (7) bears a similarity to total variation denoising (Sadhanala et al., 2016; 2017), graph-constrained regularization (Belkin et al., 2004; 2006), and particularly Laplacian shrinkage methods (Huang et al., 2011). However, we argue that the proposed GS-IMC approach is fundamentally different from previous works. Specifically, they operate on real-valued data while GS-IMC deals with positive-unlabeled data. We believe that our problem setting is more complicated, since the unlabeled data is a mixture of unobserved positive examples and true negative examples. In addition, existing methods analyze the recoverability considering statistical noise to be continuous Gaussian, e.g., Theorem 3 (Sadhanala et al., 2016), Theorem 1.1 (Pesenson, 2009) etc.
In contrast, we study the upper bound of GS-IMC in the presence of discrete random label noise ξ. Specifically, Theorem 4 extends the Paley-Wiener theorem of (Pesenson, 2009) on real-valued data to positive-unlabelled data, showing that a bandlimited function y can be recovered from its values on a certain set Ω. Theorem 5 takes into account the statistical noise ξ and shows that a bandlimited function y can be accurately reconstructed if $C_n^2 = C > 0$ is a constant, not growing with n.
Theorem 4 (Error Analysis, extension of Theorem 1.1 in (Pesenson, 2009)). Given R(λ) with λ ≤ R(λ) on graph G = (V, E), assume that $\Omega^c = V - \Omega$ admits the Poincaré inequality $\|\phi\| \le \Lambda\|Ł\phi\|$ for any $\phi \in L_2(\Omega^c)$ with Λ > 0; then for any $y \in PW_\omega(G)$ with $0 < \omega \le R(\omega) < 1/\Lambda$,
$$\|y - \hat{y}_k\| \le 2\big(\Lambda R(\omega)\big)^k\|y\| \quad\text{and}\quad y = \lim_{k\to\infty}\hat{y}_k, \tag{8}$$
where k is a pre-specified hyperparameter and $\hat{y}_k$ is the solution of Eq. (5) with $\epsilon = 0$.
Remark. Theorem 4 indicates that a better estimate of y can be achieved by simply using a higher k, but there is a trade-off between the accuracy of the estimate on one hand, and complexity and numerical stability on the other. We found by experiments that GS-IMC with k = 1 can achieve SOTA results for inductive top-N recommendation on the benchmarks. We provide more discussions in Appendix G.

Theorem 5 (Error Analysis, with label noise). Suppose that ξ is the random noise with flip rate ρ, and positive $\lambda_1 \le \dots \le \lambda_n$ are the eigenvalues of the Laplacian Ł; then for any function $y \in PW_\omega(G)$,
$$\mathbb{E}\big[\mathrm{MSE}(y, \hat{y})\big] \le \frac{C_n^2}{n}\Big(\frac{\rho}{R(\lambda_1)(1 + R(\lambda_1)/\varphi)^2} + \frac{1}{4\varphi}\Big), \tag{9}$$
where $C_n^2 = R(\omega)\|y\|^2$, ϕ is the regularization parameter and ŷ is defined in Eq. (7).
Remark. Theorem 5 shows that for a constant $C_n^2 = C > 0$ (not growing with n), the reconstruction error converges to zero as n becomes large enough. Also, the reconstruction error decreases as R(ω) declines, which means that low-frequency functions can be recovered more easily than high-frequency functions. We provide more discussions on ϕ and ρ in Appendix H.
4 BAYESIAN GS-IMC FOR ONLINE LEARNING
In general, an inductive learning approach such as GAT (Veličković et al., 2017) and SAGE (Hamilton et al., 2017), etc., can naturally cope with the online learning scenario where the prediction is refreshed given a newly observed example. Essentially, GS-IMC is an inductive learning approach that can update the prediction more effectively than previous matrix completion methods (Devooght et al., 2015; He et al., 2016). Let ∆s denote the newly coming data, which might be one-hot as in Fig. 3(a), and let ŷ denote the original prediction based on data s; then we can efficiently update ŷ to $\hat{y}_\mathrm{new}$ as follows:
$$\hat{y}_\mathrm{new} = H(Ł)(s + \Delta s) = \hat{y} + H(Ł)\Delta s. \tag{10}$$
However, we argue that GS-IMC ingests the new data in an unrealistic, suboptimal way. Specifically, it does not take into account the model uncertainties, assuming that the observed positive data is noise-free. This assumption limits model’s fidelity and flexibility for real applications. In addition, it assigns a uniform weight to each sample, assuming that the innovation, i.e., the difference between the current a priori prediction and the current observation information, is equal for all samples.
4.1 PROBLEM FORMULATION
To model the uncertainties, we denote a measurement by $z = U^\top\hat{y}$ (with Fourier basis U), which represents the prediction ŷ in the graph Fourier domain, and we assume that z is determined by a stochastic process.
In Fig. 3(b), the measurement z is governed by the hidden state x, and the noise ν captures the data uncertainties in an implicit manner. The choice of the state transition equation needs to account for two critical criteria: (1) the model uncertainties need to be considered; (2) the transition from state x to state $x_\mathrm{new}$ needs to represent the evolution of the predictions ŷ/$\hat{y}_\mathrm{new}$ defined in Eq. (10).
To account for this, we propose a Bayesian extension of GS-IMC, entitled BGS-IMC, which considers the stochastic filtering problem in a dynamic state-space form:
$$x_\mathrm{new} = x + F\Delta s + \eta, \tag{11}$$
$$z_\mathrm{new} = x_\mathrm{new} + \nu, \tag{12}$$
where Eq. (11) essentially follows Eq. (10) in the graph Fourier domain, i.e., multiplying both sides of Eq. (10) by U. In control theory, $F = UH(Ł)$ is called the input matrix and ∆s represents the system input vector. The state equation (11) describes how the true state $x, x_\mathrm{new}$ evolves under the impact of the process noise $\eta \sim \mathcal{N}(0, \Sigma_\eta)$, and the measurement equation (12) characterizes how a measurement $z_\mathrm{new} = U^\top(s + \Delta s)$ of the true state $x_\mathrm{new}$ is corrupted by the measurement noise $\nu \sim \mathcal{N}(0, \Sigma_\nu)$. It is worth noting that a larger determinant of $\Sigma_\nu$ means that the data points are more dispersed, while for $\Sigma_\eta$ a large determinant implies that BGS-IMC is not sufficiently expressive and it is better to use the measurement for decision making, i.e., BGS-IMC reduces to GS-IMC.
Using Bayes' rule, the posterior is given by:
$$p(x_\mathrm{new}\mid\Delta s, z_\mathrm{new}) \propto p(z_\mathrm{new}\mid x_\mathrm{new})\,p(x_\mathrm{new}\mid\Delta s), \tag{13}$$
where $p(z_\mathrm{new}\mid x_\mathrm{new})$ and $p(x_\mathrm{new}\mid\Delta s)$ follow a Gauss-Markov process.
4.2 PREDICTION-CORRECTION UPDATE ALGORITHM
To make an accurate prediction, we propose a prediction-correction update algorithm, resembling workhorse Kalman filtering-based approaches (Kalman, 1960; Wiener et al., 1964). To our knowledge, the class of prediction-correction methods appears less studied in the domain of 1-bit matrix completion, despite its popularity in time-series forecasting (Simonetto et al., 2016; de Bézenac et al., 2020) and computer vision (Matthies et al., 1989; Scharstein & Szeliski, 2002).
In the prediction step, we follow the evolution of the state as defined in Eq. (11) to compute the mean and the covariance of the conditional $p(x_\mathrm{new}\mid\Delta s)$:
$$\mathbb{E}[x_\mathrm{new}\mid\Delta s] = \hat{x} + F\Delta s = \bar{x}_\mathrm{new} \quad\text{and}\quad \mathrm{Var}(x_\mathrm{new}\mid\Delta s) = P + \Sigma_\eta = \bar{P}_\mathrm{new}, \tag{14}$$
where x̂ is the estimated state of x and P is the estimate covariance, i.e., $P = \mathbb{E}(x - \hat{x})(x - \hat{x})^\top$, while $\bar{x}_\mathrm{new}$, $\bar{P}_\mathrm{new}$ are the extrapolated estimate state and covariance respectively. Meanwhile, it is easy to obtain the mean and the covariance of the conditional $p(z_\mathrm{new}\mid x_\mathrm{new})$:
$$\mathbb{E}[z_\mathrm{new}\mid x_\mathrm{new}] = \mathbb{E}[x_\mathrm{new} + \nu] = x_\mathrm{new} \quad\text{and}\quad \mathrm{Var}(z_\mathrm{new}\mid x_\mathrm{new}) = \mathbb{E}[\nu\nu^\top] = \Sigma_\nu. \tag{15}$$
In the correction step, we combine Eq. (13) with Eqs. (14) and (15):
$$p(x_\mathrm{new}\mid\Delta s, z_\mathrm{new}) \propto \exp\Big(-\frac{1}{2}\big[(x_\mathrm{new} - z_\mathrm{new})^\top\Sigma_\nu^{-1}(x_\mathrm{new} - z_\mathrm{new}) + (x_\mathrm{new} - \bar{x}_\mathrm{new})^\top\bar{P}_\mathrm{new}^{-1}(x_\mathrm{new} - \bar{x}_\mathrm{new})\big]\Big).$$
By solving $\partial\ln p(x_\mathrm{new}\mid\Delta s, z_\mathrm{new})/\partial x_\mathrm{new} = 0$, we have the following corrected estimate state $\hat{x}_\mathrm{new}$ and covariance $P_\mathrm{new}$, where we recall that the new measurement is defined as $z_\mathrm{new} = U^\top(s + \Delta s)$:
$$\hat{x}_\mathrm{new} = \bar{x}_\mathrm{new} + K(z_\mathrm{new} - \bar{x}_\mathrm{new}), \tag{16}$$
$$P_\mathrm{new} = (I - K)\bar{P}_\mathrm{new}(I - K)^\top + K\Sigma_\nu K^\top, \tag{17}$$
$$K = \bar{P}_\mathrm{new}(\bar{P}_\mathrm{new} + \Sigma_\nu)^{-1}, \tag{18}$$
where K is the Kalman gain and $z_\mathrm{new} - \bar{x}_\mathrm{new}$ is called the innovation. It is worth noting that Eq. (16) adjusts the predicted iterate $\bar{x}_\mathrm{new}$ in terms of the innovation, which is the key difference to GS-IMC and existing methods, e.g., GAT (Veličković et al., 2017) and SAGE (Hamilton et al., 2017).

Remark. The BGS-IMC approach is highly scalable in Paley-Wiener spaces. Let $PW_\omega(G)$ be the span of k (≪ n) eigenfunctions whose eigenvalues are no greater than ω; then the transition matrix F in (11) is k-by-n and every covariance matrix is of size k × k. Computationally, when $P, \Sigma_\eta, \Sigma_\nu$ are diagonal, it takes $O(k^2)$ time to compute $\hat{x}_\mathrm{new}$ and $P_\mathrm{new}$, and $O(nk)$ time for $\bar{x}_\mathrm{new}$ and $\bar{P}_\mathrm{new}$. The total time complexity is $O(nk + k^2)$, linear in the number of vertices n. Further, Proposition 6 shows that $\hat{x}_\mathrm{new}$ in (16) is an unbiased and minimum-variance estimator.

Proposition 6. Given an observation ∆s, provided F is known, $\hat{x}_\mathrm{new}$ obtained in Eq. (16) is the optimal linear estimator in the sense that it is unbiased and of minimum variance.
To summarize, the complete procedure of BGS-IMC is to first specify $\Sigma_\eta, \Sigma_\nu, P$ using prior knowledge, then to calculate the extrapolated state $\bar{x}_\mathrm{new}$ using (14), and finally to obtain $\hat{x}_\mathrm{new}$ using (16), so that we have the updated model prediction $\hat{y}_\mathrm{new} = U\hat{x}_\mathrm{new}$ that ingests the new observation.
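Assuming diagonal covariances, as in the complexity remark above, one prediction-correction step can be sketched as follows; here `h` denotes the diagonal spectral response of H(Ł), so that $F\Delta s$ becomes $h \odot (U^\top\Delta s)$, and the function name is our own.

```python
import numpy as np

def bgs_imc_step(x_hat, P, delta_s, s, U, h, sigma_eta, sigma_nu):
    """One prediction-correction update, Eqs. (14)-(18), with diagonal covariances.

    x_hat, P: previous state estimate and its (diagonal) covariance, length k.
    delta_s:  newly observed data; s: previously observed data (length n).
    U: n x k eigenvectors; h: length-k spectral response of H(L).
    """
    # prediction step, Eq. (14)
    x_bar = x_hat + h * (U.T @ delta_s)
    P_bar = P + sigma_eta
    # correction step, Eqs. (16)-(18)
    z_new = U.T @ (s + delta_s)                  # new measurement
    K = P_bar / (P_bar + sigma_nu)               # diagonal Kalman gain
    x_new = x_bar + K * (z_new - x_bar)          # innovation-adjusted estimate
    P_new = (1 - K) * P_bar * (1 - K) + K * sigma_nu * K
    y_new = U @ x_new                            # updated model prediction
    return x_new, P_new, y_new
```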
5 EXPERIMENT
This section evaluates GS-IMC (in Section 3) and BGS-IMC (in Section 4) on real-world datasets. All the experiments are conducted on the machines with Xeon 3175X CPU, 128G memory and P40 GPU with 24 GB memory. The source code and models will be made publicly available.
5.1 EXPERIMENTAL SETUP
We adopt three large real-world datasets widely used for evaluating recommendation algorithms: (1) Koubei (1,828,250 ratings of 212,831 users and 10,213 items); (2) Tmall (7,632,826 ratings of 320,497 users and 21,876 items); (3) Netflix (100,444,166 ratings of 400,498 users and 17,770 items). For each dataset, we follow the experimental protocols in (Liang et al., 2018; Wu et al., 2017a) for inductive top-N ranking, where the users are split into training/validation/test sets with ratio 8 : 1 : 1. Then, we use all the data from the training users to optimize the model parameters. In the testing phase, we sort all interactions of the validation/test users in chronological order, holding out the last interaction for testing and inductively generating the necessary representations using the rest of the data. The results in terms of hit-rate (HR) and normalized discounted cumulative gain (NDCG) are reported on the test set for the model which delivers the best results on the validation set.
We implement our method in Apache Spark with Intel MKL, where the matrix computation is parallelized and distributed. In the experiments, we denote the item-user rating matrix by R and further define the Laplacian $Ł = I - D_v^{-1/2}RD_e^{-1}R^\top D_v^{-1/2}$. We set a = 4, γ = 1, ϕ = 10 for GS-IMC, while we set the covariances to $\Sigma_\eta = \Sigma_\nu = 10^{-4}I$ and initialize P using the validation data for BGS-IMC. In the test stage, if a user has |Ω| training interactions, BGS-IMC uses the first |Ω| − 1 interactions to produce the initial state x̂, then feeds the last interaction to simulate the online update.
In the literature, there are few existing works that enable inductive inference for top-N ranking using only the ratings. To make thorough comparisons, we prefer to strengthen IDCF with GCMC for improved performance (IDCF+ for short) rather than report the results of IDCF (Wu et al., 2021) and GCMC (van den Berg et al., 2017) as individuals. Furthermore, we study their performance with different graph neural networks including ChebyNet (Defferrard et al., 2016), GAT (Veličković et al., 2017), GraphSage (Hamilton et al., 2017), SGC (Wu et al., 2019) and ARMA (Bianchi et al., 2021). We adopt the Adam optimizer (Kingma & Ba, 2015) with the learning rate decayed by 0.98 every epoch. We search by grid the learning rate and L2 regularizer in {0.1, 0.01, . . . , 0.00001}, the dropout rate over {0.1, 0.2, . . . , 0.7} and the latent factor size ranging over {32, 64, . . . , 512} for the optimal performance. In addition, we also report the results of the shallow models, i.e., MRFCF (Steck, 2019) and SGMC (Chen et al., 2021), which are most closely related to our proposed method. The software provided by the authors is used in the experiments.
We omit the results of the Markov chain Monte Carlo based FISM (He & McAuley, 2016), the variational auto-encoder based MultVAE (Liang et al., 2018), the scalable Collrank (Wu et al., 2017b), and the graph neural networks GCMC (van den Berg et al., 2017) and NGCF (Wang et al., 2019), as their accuracies were found to be below par in SGMC (Chen et al., 2021) and IDCF (Wu et al., 2021).
5.2 ACCURACY COMPARISON
In this section, GS-IMC and BGS-IMC assume that the underlying signal is λ1000-bandlimited, and we compare them with eight state-of-the-arts graph based baselines, including spatial graph models (i.e., IDCF (Wu et al., 2021), IDCF+GAT (Veličković et al., 2017), IDCF+GraphSAGE (Hamilton et al., 2017)), approximate spectral graph models with high-order polynomials (i.e., IDCF+SGC (Wu et al., 2019), IDCF+ChebyNet (Defferrard et al., 2016), IDCF+ARMA (Bianchi et al., 2021)) and exact spectral graph models (i.e., MRFCF (Steck, 2019) and SGMC (Chen et al., 2021)).
In Table 2 and Table 3, the results on the real-world Koubei, Tmall and Netflix datasets show that BGS-IMC outperforms all the baselines on all the datasets. Note that MRFCF (Steck, 2019) is the full-rank version of GS-IMC with (one-step) random walk regularization. We can see that MRFCF underperforms its counterpart on all three datasets, which demonstrates the advantage of the bandlimited assumption for inductive top-N ranking tasks. Further, BGS-IMC consistently outperforms GS-IMC on all three datasets by a margin, which proves the efficacy of the prediction-correction algorithm for incremental updates. Additionally, we provide extensive ablation studies in Appendix C, scalability studies in Appendix D and more comparisons with SOTA sequential models in Appendix E.
To summarize, the proposed method further improves the prediction accuracy because: 1) GS-IMC exploits the structural information in the 1-bit matrix to mitigate the negative influence of discrete label noise in the graph vertex domain; and 2) BGS-IMC further improves the prediction accuracy by considering continuous Gaussian noise in the graph Fourier domain and yielding unbiased and minimum-variance predictions using the prediction-correction update algorithm.
6 CONCLUSION
We have introduced a unified graph signal sampling framework for inductive 1-bit matrix completion, together with theoretical bounds and insights. Specifically, GS-IMC is devised to learn the structural information in the 1-bit matrix to mitigate the negative influence of discrete label noise in the graph vertex domain. Second, BGS-IMC takes into account the model uncertainties in the graph Fourier domain and provides a prediction-correction update algorithm to obtain unbiased and minimum-variance reconstructions. Both GS-IMC and BGS-IMC have closed-form solutions and are highly scalable. Experiments on the task of inductive top-N ranking have shown their superiority.
A RELATED WORK
Inductive matrix completion. There has been a flurry of research on the problem of inductive matrix completion (Chiang et al., 2018; Jain & Dhillon, 2013; Xu et al., 2013; Zhong et al., 2019), which leverages side information (or content features) in the form of feature vectors to predict inductively on new rows and columns. The intuition behind this family of algorithms is to learn mappings from the feature space to the latent factor space, such that inductive matrix completion methods can adapt to new rows and columns without retraining. However, it has been recently shown (Zhang & Chen, 2020; Ledent et al., 2021; Wu et al., 2021) that inductive matrix completion methods provide limited performance due to the inferior expressiveness of the feature space. On the other hand, the prediction accuracy has strong constraints on the content quality, but in practice high-quality content is becoming hard to collect due to legal risks (Voigt & Von dem Bussche, 2017). By contrast, one advantage of our approach is the capacity of inductive learning without using side information.
Graph neural networks. Inductive representation learning over graph structured data has received significant attention recently due to its ubiquitous applicability. Among the existing works, GraphSAGE (Hamilton et al., 2017) and GAT (Veličković et al., 2017) propose to generate embeddings for previously unseen data by sampling and aggregating features from a node’s local neighbors. In the meantime, various approaches such as ChebyNet (Defferrard et al., 2016) and GCN (Kipf & Welling, 2016) exploit convolutional neural networks to capture sophisticated feature information but are generally less scalable. To address the scalability issue, Wu et al. (2019) develop simplified graph convolutional networks (SGCN) which utilize polynomial filters to simulate the stacked graph convolutional layers. Furthermore, Bianchi et al. (2021) extend auto-regressive moving average (ARMA) filters to convolutional layers for broader frequency responses.
To leverage recent advances in graph neural networks, lightGCN (He et al., 2020), GCMC (van den Berg et al., 2017) and PinSAGE (Ying et al., 2018) represent the matrix by a bipartite graph, then generalize the representations to unseen nodes by summing the content-based embeddings over the neighbors. Differently, IGMC (Zhang & Chen, 2020) trains graph neural networks which encode the subgraphs around an edge into latent factors, then decode the factors back to the value on the edge. Recently, IDCF (Wu et al., 2021) studies the problem in a downsampled homogeneous graph (i.e., the user-user graph in recommender systems), then applies attention networks to yield inductive representations. Probably most closely related to our approach are IDCF (Wu et al., 2021) and IGMC (Zhang & Chen, 2020), which do not assume any side information, such as user profiles and item properties. The key advantage of our approach is not only the closed-form solution for efficient GNN training, but also the theoretical results which guarantee the reconstruction of unseen rows and columns and the practical guidance for potential improvements.
Graph signal sampling. In general, graph signal sampling aims to reconstruct real-valued functions defined on the vertices (i.e., graph signals) from their values on certain subset of vertices. Existing approaches commonly build upon the assumption of bandlimitedness, by which the signal of interest lies in the span of leading eigenfunctions of the graph Laplacian (Pesenson, 2000; 2008). It is worth noting that we are not the first to consider the connections between graph signal sampling and matrix completion, as recent work by Romero et al. (Romero et al., 2016) has proposed a unifying kernel based framework to broaden both of graph signal sampling and matrix completion perspectives. However, we argue that Romero’s work and its successors (Benzi et al., 2016; Mao et al., 2018; McNeil et al., 2021) are orthogonal to our approach as they mainly focus on real-valued matrix completion in the transductive manner. Specifically, our approach concerns two challenging problems when connecting the ideas and methods of graph signal sampling with inductive one-bit matrix completion — one-bit quantization and online learning.
To satisfy the requirement of online learning, existing works learn the parameters for new rows and columns by performing either stochastic gradient descent, as used in MCEX (Giménez-Febrer et al., 2019), or alternating least squares, as used in eALS (He et al., 2016). The advantage of BGS-IMC is threefold: (i) BGS-IMC has closed-form solutions, bypassing the well-known difficulty of tuning the learning rate; (ii) BGS-IMC considers the random Gaussian noise in the graph Fourier domain, characterizing the uncertainties in the measurement and modeling; and (iii) the prediction-correction algorithm, resembling Kalman filtering, can provide unbiased and minimum-variance reconstructions.
Probably most closely related to our approach are SGMC (Chen et al., 2021) and MRFCF (Steck, 2019) in the sense that both of them formulate their solutions as spectral graph filters and can be regarded as methods for data filtering in domains of discrete signal processing. More specifically, SGMC optimizes latent factors V,U by minimizing the normalized matrix reconstruction error:
min_{U,V} ‖ D_v^{−1/2} R D_e^{−1/2} − VU ‖, s.t. ‖U‖ ≤ ε, ‖V‖ ≤ η,  (19)
while MRFCF minimizes the following matrix reconstruction error:
min_X ‖ R − XR ‖ + λ‖X‖, s.t. diag(X) = 0,  (20)
where the diagonal entries of the parameter X are forced to zero. It is obvious now that both SGMC and MRFCF focus on minimizing the matrix reconstruction error. This is one of the key differences from our graph signal sampling framework, which optimizes the functional minimization problem defined in Eq. 5. We argue that our problem formulation is more suitable for the problem of inductive one-bit matrix completion, since it focuses on the reconstruction of bandlimited functions, no matter whether the function is observed in training or at test time. Perhaps more importantly, both methods (Chen et al., 2021; Steck, 2019) can be included as special cases of our framework. We believe that a unified framework across graph signal sampling and inductive matrix completion could benefit both fields, since the modeling knowledge from both domains can be more deeply shared.
Advantages of graph signal sampling perspectives. A graph signal sampling perspective requires modeling the 1-bit matrix data as signals on a graph and formulating the objective in the functional space. Doing so opens the possibility of processing, filtering and analyzing the matrix data with vertex-frequency analysis (Hammond et al., 2011; Shuman et al., 2013), time-variant analysis (Mao et al., 2018; McNeil et al., 2021), smoothing and filtering (Kalman, 1960; Khan & Moura, 2008), etc. In this paper, we technically explore the use of graph spectral filters to inductively recover the missing values of the matrix, a Kalman-filtering-based approach to deal with streaming data in the online learning scenario, and vertex-frequency analysis to discover the advantages of the dynamic BERT4REC model over the static BGS-IMC model. We believe that our graph signal sampling framework can serve as a new paradigm for 1-bit matrix completion, especially in large-scale and dynamic systems.
B GENERALIZING SGMC AND MRFCF
This section shows how GS-IMC generalizes SGMC (Chen et al., 2021) and MRFCF (Steck, 2019).
GS-IMC generalizes SGMC. Given the observation R, we follow the standard routine of hypergraphs (Zhou et al., 2007) to calculate the hypergraph Laplacian matrix Ł = I − D_v^{−1/2} R D_e^{−} R^⊤ D_v^{−1/2}, where D_v (D_e) is the diagonal degree matrix of vertices (edges). Then the rank-k approximation (see Eq. (9) in (Chen et al., 2021)) is equivalent to our result using the bandlimited norm R(λ) = 1 if λ ≤ λk and R(λ) = ∞ otherwise,
ŷ = ( Σ_l (1 + R(λl)/ϕ) ul ul^⊤ )^− s = Σ_{l≤k} ul ul^⊤ s = U_k U_k^⊤ s,
where we set ϕ = ∞ and lim_{ϕ→∞} R(λ)/ϕ = ∞ for λ > λk, and matrix U_k comprises the k leading eigenvectors whose eigenvalues are less than or equal to λk.
GS-IMC generalizes MRFCF. Given R, we simply adopt the correlation relationship to construct the affinity matrix and define the Laplacian as Ł = 2I − D_v^{−1/2} R R^⊤ D_v^{−1/2}. Then the matrix approximation (see Eq. (4) in (Steck, 2019)) is equivalent to our GS-IMC approach using the one-step random walk norm,

ŷ = ( Σ_l (1 + 1/(a − λl)) ul ul^⊤ )^− s
  = Σ_l ( 1 − 1/(a − λl + 1) ) ul ul^⊤ s
  = { I − ((a + 1)I − Ł)^− } s
  = { I − ((a − 1)I + D_v^{−1/2} R R^⊤ D_v^{−1/2})^− } s,

where we set ϕ = 1 and a ≥ λ_max is a pre-specified parameter for the random walk regularization.
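To make this equivalence concrete, the following minimal NumPy sketch (our own toy construction; the random data and variable names are illustrative, not code from either paper) numerically checks that the one-step random walk spectral filter coincides with the MRFCF-style closed form derived above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
R = (rng.random((n, 5)) < 0.5).astype(float)      # toy binary observations
d = R.sum(axis=1)
d[d == 0] = 1.0                                   # guard isolated vertices
Dv = np.diag(d ** -0.5)
L = 2 * np.eye(n) - Dv @ R @ R.T @ Dv             # Laplacian defined above
lam, U = np.linalg.eigh(L)
a = lam.max() + 1.0                               # a >= lambda_max
H_spectral = U @ np.diag(1.0 / (1.0 + 1.0 / (a - lam))) @ U.T
H_closed = np.eye(n) - np.linalg.inv((a + 1) * np.eye(n) - L)
assert np.allclose(H_spectral, H_closed)          # the two forms agree
```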
C ABLATION STUDIES
This study evaluates how GS-IMC and BGS-IMC perform with different choices of the regularization function and the graph definition. In the following, we assume the underlying signal to recover is in the Paley-Wiener space PW_{λ1000}(G), and hence we only take the first 1000 eigenfunctions whose eigenvalues are not greater than λ1000 to make predictions.
C.1 IMPACT OF REGULARIZATION FUNCTIONS
Tables 4 and 5 show that for the proposed GS-IMC models, Tikhonov regularization produces the best HR and NDCG results on both Koubei and Netflix, while diffusion process regularization performs best on Tmall. Meanwhile, BGS-IMC with random walk regularization achieves the best HR and NDCG results on Koubei, while Tikhonov regularization and diffusion process regularization are best on Tmall and Netflix. Perhaps more importantly, BGS-IMC consistently outperforms GS-IMC on all three datasets by a margin, which proves the efficacy of the prediction-correction algorithm.
We highlight that BGS-IMC can further improve the performance of GS-IMC because BGS-IMC considers Gaussian noise in the Fourier domain and the prediction-correction update algorithm is capable of providing unbiased and minimum-variance predictions.
C.2 IMPACT OF GRAPH DEFINITIONS
Table 6 presents the HR and NDCG results of GS-IMC with one-step random walk regularization on the Netflix prize data. To avoid clutter, we omit the results of GS-IMC with other regularization functions, since their results share the same trends. It seems that the regular graph that uses the covariance matrix as the affinity matrix delivers better HR and NDCG results when recommending 10 and 50 items, while the hypergraph helps achieve better results when recommending 100 items.
D SCALABILITY STUDIES
The solution for either GS-IMC or BGS-IMC requires computing the leading eigenvectors whose eigenvalues are less than or equal to a pre-specified ω. However, one might argue that this is computationally intractable on industry-scale datasets. To address such concerns, one feasible approach is to perform the Nyström (Fowlkes et al., 2004) method to obtain the leading eigenvectors. For the completeness of the paper, we present the pseudo-code of the approximate eigendecomposition (Chen et al., 2021) in Algorithm 1, of which the computational complexity is O(lnk + k³), where n is the number of columns in Ł, l is the number of sampled columns and k is the number of eigenvectors to compute. This reduces the overhead from O(n³) to O(lnk + k³), linear in the number of rows. To evaluate how the proposed GS-IMC and BGS-IMC methods perform with the approximate eigenvectors, we conduct the experiments on the largest Netflix prize data. Table 7 reports the HR, NDCG and runtime results for the standard GS-IMC and BGS-IMC methods, and their scalable versions entitled GS-IMCs and BGS-IMCs. To make the comparison complete, we also present the results of the neural IDCF (Wu et al., 2021) model equipped with ChebyNet (Defferrard et al., 2016). It is obvious that the standard GS-IMC and BGS-IMC methods consume only a small fraction of the training time required by graph neural networks.
Algorithm 1 Approximate Eigendecomposition
Require: n × l matrix C derived from l columns sampled from the n × n kernel matrix Ł without replacement, l × l matrix A composed of the intersection of these l columns, l × l matrix W, rank k, the oversampling parameter p and the number of power iterations q.
Ensure: approximate eigenvalues Σ̃ and eigenvectors Ũ.
1: Generate a random Gaussian matrix Ω ∈ R^{l×(k+p)}, then compute the sample matrix A^qΩ.
2: Perform QR-decomposition on A^qΩ to obtain an orthonormal matrix Q that satisfies the equation A^qΩ = QQ^⊤A^qΩ, then solve ZQ^⊤Ω = Q^⊤WΩ.
3: Compute the eigenvalue decomposition on the (k+p)-by-(k+p) matrix Z, i.e., Z = U_Z Σ_Z U_Z^⊤, to obtain U_W = QU_Z[:, :k] and Σ_W = Σ_Z[:k, :k].
4: Return Σ̃ ← Σ_W, Ũ ← CA^{−1/2}U_W Σ_W^{−1/2}.
Meanwhile, GS-IMCs achieves comparable ranking performance to GS-IMC, while improving the efficiency by 8X. Likewise, BGS-IMCs enjoys improved system scalability without significant loss in prediction accuracy. The overall results demonstrate that GS-IMC and BGS-IMC are highly scalable on very large data.
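For reference, below is a rough NumPy sketch of Algorithm 1. The function name, the symmetrization of Z and the eigenvalue clipping are our own additions for numerical safety, so this is an illustration of the procedure rather than the exact implementation of (Chen et al., 2021):

```python
import numpy as np

def approx_eig(C, A, W, k, p=10, q=2):
    """Approximate eigendecomposition following Algorithm 1 (rough sketch)."""
    l = A.shape[0]
    Omega = np.random.randn(l, k + p)              # random Gaussian test matrix
    Y = np.linalg.matrix_power(A, q) @ Omega       # sample matrix A^q Omega
    Q, _ = np.linalg.qr(Y)                         # orthonormal basis Q
    X, B = Q.T @ Omega, Q.T @ W @ Omega
    Z = np.linalg.solve(X.T, B.T).T                # solve Z X = B for Z
    Z = (Z + Z.T) / 2.0                            # symmetrize before eigh
    w, U_Z = np.linalg.eigh(Z)
    idx = np.argsort(w)[::-1][:k]                  # keep the k leading pairs
    Sigma_W, U_W = w[idx], Q @ U_Z[:, idx]
    wa, Va = np.linalg.eigh(A)                     # A^{-1/2} via its eigenpairs
    A_inv_half = Va @ np.diag(np.clip(wa, 1e-12, None) ** -0.5) @ Va.T
    U_tilde = C @ A_inv_half @ U_W @ np.diag(np.clip(Sigma_W, 1e-12, None) ** -0.5)
    return Sigma_W, U_tilde
```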
E SPECTRUM ANALYSIS AND DISCUSSION WITH SEQUENTIAL MODELS
We compare BGS-IMC with recent sequential recommendation models, including Transformer-based SASREC (Kang & McAuley, 2018), BERT-based BERT4REC (Sun et al., 2019) and causal-CNN-based GREC (Yuan et al., 2020). We choose an embedding size of 256 and search for the optimal hyper-parameters by grid. Each model is configured using the same parameters provided in the original paper, i.e., two attention blocks with one head for SASREC, three attention blocks with eight heads for BERT4REC and six dilated CNNs with degrees 1, 2, 2, 4, 4, 8 for GREC.
Table 8 presents HR and NDCG results on Koubei for inductive top-N ranking. Note that BGS-IMC only accepts the most recent behavior to update the obsolete state for incremental learning, whereas SASREC, BERT4REC and GREC focus on modeling the dynamic patterns in the sequence. Hence, such a comparison is not in favor of BGS-IMC. Interestingly, we see that static BGS-IMC achieves comparable HR results to SOTA sequential models, while consuming a small fraction of the running time. From this viewpoint, BGS-IMC is more cost-effective than the compared methods.
To fully understand the performance gap in NDCG, we analyze GS-IMC, BGS-IMC and the best baseline BERT4REC in the graph spectral domain, where we limit the ℓ2 norm of each user's spectral signals to one and visualize their averaged values in Figure 4. As expected, the energy of GS-IMC and BGS-IMC is concentrated on the low frequencies, since the high-frequency functions are highly penalized during minimization. Furthermore, the proposed prediction-correction update algorithm increases the energy of high-frequency functions. This bears a similarity to BERT4REC, whose high-frequency functions are not constrained and can aggressively raise the rankings of unpopular items. This explains why BERT4REC and BGS-IMC have better NDCGs than GS-IMC.
F LIMITATION AND FUTURE WORK
Limitation on sequence modeling. The proposed BGS-IMC method is simple and cannot capture the sophisticated dynamics in the sequence. However, we believe that our work opens the possibility of benefiting sequential recommendation with graph signal processing techniques, for example the extended Kalman filter, KalmanNet and particle filters.
Limitation on sample complexity. The sample complexity is not provided in the paper, and we believe that this is an open problem due to the lack of regularity in the graph, which prevents us from defining the idea of sampling "every other node" (the reader is referred to (Anis et al., 2016; Ortega et al., 2018) for more details).
Future work on deep graph learning. Though GS-IMC and BGS-IMC are mainly compared with neural graph models, we note that our approach can help improve the performance of existing graph neural networks including GAT (Veličković et al., 2017) and SAGE (Hamilton et al., 2017), etc. We summarize the following directions for future work: 1) It is interesting to see how GS-IMC takes advantage of content features. One feasible idea is to use GS-IMC as multi-scale wavelets which can be easily adapted to graph neural networks; 2) BGS-IMC can also be utilized to optimize the aggregation module for improved robustness, as every neighbor's representation can be viewed as a measurement of the query node's representation.
G PROOF OF THEOREM 4
Proof. This proof is analogous to that of Theorem 1.1 in (Pesenson, 2009), where we extend their results from the Sobolev norm to a broader class of positive, monotonically increasing functionals.
Proof of the first part of Theorem 4.

Suppose that the Laplacian operator Ł has bounded inverse and the fitting error ε = 0. If y ∈ PWω(G) and ŷk interpolates y on a set Ω = V − Ωc, and Ωc admits the Poincare inequality ‖φ‖ ≤ Λ‖Łφ‖ for any φ ∈ L2(Ωc), then y − ŷk ∈ L2(Ωc) and we have

‖y − ŷk‖ ≤ Λ‖Ł(y − ŷk)‖.
At this point, we can apply Lemma 7 with a = Λ and φ = y − ŷk. It gives the following inequality

‖y − ŷk‖ ≤ Λ^k ‖Ł^k(y − ŷk)‖
for all k = 2^l, l = 0, 1, 2, . . .. Since λ ≤ R(λ) and R(λ) is a positive, monotonically increasing function, it gives

Λ^k ‖Ł^k(y − ŷk)‖ ≤ Λ^k ‖R(Ł)^k(y − ŷk)‖.
Because the interpolant ŷk minimizes the norm ‖R(Ł)^k · ‖, the triangle inequality gives

‖R(Ł)^k(y − ŷk)‖ ≤ ‖R(Ł)^k y‖ + ‖R(Ł)^k ŷk‖ ≤ 2‖R(Ł)^k y‖.
As for functions y ∈ PWω(G) ⊂ PW_{R(ω)}(G), the Bernstein inequality in Lemma 8 holds:

‖R(Ł)^k y‖ ≤ R(ω)^k ‖y‖, k ∈ N.
Putting everything together, we conclude the first part of Theorem 4:

‖y − ŷk‖ ≤ 2(ΛR(ω))^k ‖y‖, ΛR(ω) < 1, k = 2^l, l ∈ N.  (21)
Proof of the second part of Theorem 4.

Since ΛR(ω) < 1 holds, it gives the following limits

lim_{k→∞} (ΛR(ω))^k = 0 and lim_{k→∞} ‖y − ŷk‖ ≤ 0.
With the non-negativity of the norm, we have

‖y − ŷk‖ ≥ 0.  (22)

This implies the second part of Theorem 4:

y = lim_{k→∞} ŷk.  (23)
Lemma 7 (restated from Lemma 4.1 in (Pesenson, 2009)). Suppose that Ł is a bounded self-adjoint positive definite operator in a Hilbert space L2(G), and ‖φ‖ ≤ a‖Łφ‖ holds true for any φ ∈ L2(G) and a positive scalar a > 0; then for all k = 2^l, l = 0, 1, . . ., the following inequality holds true:

‖φ‖ ≤ a^k ‖Ł^k φ‖.  (24)
Lemma 8 (restated from Theorem 2.1 in (Pesenson, 2008)). A function f ∈ L2(G) belongs to PWω(G) if and only if the following Bernstein inequality holds true for all s ∈ R+:

‖Ł^s f‖ ≤ ω^s ‖f‖.  (25)
G.1 EXTRA DISCUSSION
In (Pesenson, 2008), the complementary set S = Ωc = V − Ω which admits the Poincare inequality is called a Λ-set. Theorem 4 in our paper and Theorem 1.1 in (Pesenson, 2009) state that bandlimited functions y ∈ PWω can be reconstructed from their values on a uniqueness set Ω = V − S. To better understand the concept of a Λ-set, we restate Lemma 9 from (Pesenson, 2008), which presents the conditions for a Λ-set. It is worth pointing out that (i) the second condition suggests that the vertices from the Λ-set would likely be sparsely connected with the uniqueness set Ω; and (ii) the vertices in the Λ-set are disconnected from each other or isolated in the subgraph constructed by the vertices S, since otherwise there always exists a non-zero function φ ∈ L2(S), ‖φ‖ ≠ 0, which makes ‖Łφ‖ = 0.

Lemma 9 (restated from Lemma 3.6 in (Pesenson, 2008)). Suppose that for a set of vertices S ⊂ V (finite or infinite) the following holds true:
1. every point from S is adjacent to a point from the boundary bS, the set of all vertices in V which are not in S but adjacent to a vertex in S;
2. for every v ∈ S there exists at least one adjacent point uv ∈ bS whose adjacency set intersects S only over v;
3. the number Λ = sup_{v∈S} d(v) is finite.

Then the set S is a Λ-set which admits the Poincare inequality

‖φ‖ ≤ Λ‖Łφ‖, φ ∈ L2(S).  (26)
In our experiments for recommender systems, each user's ratings might not comply with the Poincare inequality. This is because there exist some users who prefer niche products/movies (low-degree nodes). As shown in Fig. 2, user preferences on low-degree nodes are determined by high-frequency functions. When R(ω) is not large enough, the Poincare inequality does not hold for such users. This also explains why our model performs poorly on cold items.
Regarding the choice of parameter k, empirical results show that using k ≥ 2 does not help improve the performance; note that when k is large enough, all kernels reduce to the bandlimited norm, i.e., R(λ) = 1 if λ ≤ λk ≤ 1, since the gap between eigenvalues shrinks.
H PROOF OF THEOREM 5
Proof. Let ξ denote the random label noise which flips a 1 to 0 with rate ρ, and assume that the sample s = y + ξ is observed from y under noise ξ. Then for a graph spectral filter Hϕ = (I + R(Ł)/ϕ)^{−1} with positive ϕ > 0, we have

E[MSE(y, ŷ)] = (1/n) E‖y − Hϕ(y + ξ)‖²
             ≤ (1/n) E‖Hϕξ‖² + (1/n) ‖(I − Hϕ)y‖²,  (27)
where the last inequality holds due to the triangle inequality of the norm.
To bound E‖Hϕξ‖², let Cn = R^{1/2}(ω)‖y‖; then

E‖Hϕξ‖² (a)= Σ_{y(v)=1} [ ρ(Hϕ,(∗,v) × (−1))² + (1 − ρ)(Hϕ,(∗,v) × 0)² ]
           = ρ Σ_{y(v)=1} (Hϕ,(∗,v) y(v))² = ρ‖Hϕ y‖²
           (b)≤ sup_{‖R^{1/2}(Ł)y‖≤Cn} ρ‖Hϕ y‖² = sup_{‖z‖≤Cn} ρ‖Hϕ R^{−1/2}(Ł) z‖²
           = ρ Cn² σ²_max( Hϕ R^{−1/2}(Ł) )
           = ρ Cn² max_{l=1,...,n} 1/(1 + R(λl)/ϕ)² · 1/R(λl)
           ≤ ρ ϕ² Cn² / ( R(λ1)(ϕ + R(λ1))² ),  (28)
where (a) follows the definition of the flip random noise ξ and (b) holds due to the fact that y is in the Paley-Wiener space PWω(G). As for the second term,
‖(I − Hϕ)y‖² ≤ sup_{‖R^{1/2}(Ł)y‖≤Cn} ‖(I − Hϕ)y‖²
            (a)= sup_{‖z‖≤Cn} ‖(I − Hϕ) R^{−1/2}(Ł) z‖²
            = Cn² σ²_max( (I − Hϕ) R^{−1/2}(Ł) )
            = Cn² max_{l=1,...,n} ( 1 − 1/(1 + R(λl)/ϕ) )² · 1/R(λl)
            = (Cn²/ϕ) max_{l=1,...,n} (R(λl)/ϕ) / ( R(λl)/ϕ + 1 )²
            (b)≤ Cn²/(4ϕ),  (29)
where (a) holds due to the fact that the eigenvectors of I − Hϕ are the eigenvectors of R(Ł); and (b) follows the simple upper bound x/(1 + x)² ≤ 1/4 for x ≥ 0. By combining everything together, we conclude the result
E[MSE(y, ŷ)] ≤ (Cn²/n) ( ρϕ² / ( R(λ1)(ϕ + R(λ1))² ) + 1/(4ϕ) ).  (30)
H.1 EXTRA DISCUSSION
Choosing ϕ to balance the two terms on the right-hand side above gives ϕ* = ∞ for ρ < 1/8 and 1 + R(λ1)/ϕ* = 2ρ^{1/3} for ρ ≥ 1/8. Plugging in this choice, we have the upper bound, if ρ ≥ 1/8,

E[MSE(y, ŷ)] ≤ ( Cn² / (4R(λ1)n) ) (3ρ^{1/3} − 1),  (31)

and if ρ < 1/8, then the upper bound is

E[MSE(y, ŷ)] ≤ Cn² ρ / (4R(λ1)n).  (32)
This result implies that we can use a large ϕ to obtain accurate reconstructions when the flip rate ρ is not greater than 1/8, while ϕ needs to be carefully tuned when the flip rate ρ is greater than 1/8.
I PROOF OF PROPOSITION 6
Below we present the proof in a Bayesian framework; the reader is referred to (Maybeck, 1982) for a geometrical interpretation of Monte Carlo estimate statistics.
Proof of the minimal variance
To minimize the estimate variance, we need to minimize the main diagonal of the covariance Pnew:

trace(Pnew) = trace( (I − K)P̄new(I − K)^⊤ + KΣνK^⊤ ).
Then, we differentiate the trace of Pnew with respect to K:

d trace(Pnew) / dK = 2KP̄new − 2P̄new + 2KΣν.
The optimal K which minimizes the variance should satisfy d trace(Pnew)/dK = 0, which gives

K(P̄new + Σν) = P̄new.

This implies that the variance of the estimate x̂new is minimized when K = P̄new(P̄new + Σν)^−, in agreement with Eq. (18).
Proof of the unbiasedness
Suppose that the obsolete estimate x̂ is unbiased, i.e., Ex̂ = x; then using Eq. (11) we have

E(x̄new) = E(x̂ + F∆s) = x + F∆s = xnew.

Because of Eq. (12) and the fact that the measurement noise ν has zero mean, it gives

E(znew) = E(xnew + ν) = xnew.

Putting everything together, we conclude the following result:

E(x̂new) = E( x̄new + K(znew − x̄new) ) = xnew + K(xnew − xnew) = xnew.  (33)
This implies that the estimate state x̂new is unbiased.
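As a sanity check, the following Monte Carlo sketch (our own toy setup with hypothetical covariances, not an experiment from the paper) empirically confirms that the corrected estimate in Eq. (16) is unbiased:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4
x_new = rng.normal(size=k)                          # true new state
P_bar = 0.5 * np.eye(k)                             # extrapolated covariance
Sigma_nu = 0.1 * np.eye(k)                          # measurement noise covariance
K = P_bar @ np.linalg.inv(P_bar + Sigma_nu)         # optimal gain, Eq. (18)
estimates = []
for _ in range(20000):
    x_bar = x_new + rng.multivariate_normal(np.zeros(k), P_bar)     # prior draw
    z_new = x_new + rng.multivariate_normal(np.zeros(k), Sigma_nu)  # measurement
    estimates.append(x_bar + K @ (z_new - x_bar))   # corrected estimate, Eq. (16)
assert np.allclose(np.mean(estimates, axis=0), x_new, atol=0.05)
```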
J IMPLEMENTATION DETAILS
In this section, we present the details of our implementation in Section 5, including the additional dataset details, evaluation protocols and model architectures, for reproducibility. All the experiments are conducted on machines with a Xeon 3175X CPU, 128G memory and a P40 GPU with 24 GB memory. The configurations of our environments and packages are listed below:
• Ubuntu 16.04
• CUDA 10.2
• Python 3.7
• Tensorflow 1.15.3
• Pytorch 1.10
• DGL 0.7.1
• NumPy 1.19.0 with MKL Intel
J.1 ADDITIONAL DATASET DETAILS
We use three real-world datasets which are processed in line with (Liang et al., 2018; Steck, 2019): (1) for Koubei2, we keep users with at least 5 records and items that have been purchased by at least 100 users; and (2) for Tmall3, we keep users who click at least 10 items and items which have been seen by at least 200 users; and (3) for Netflix4, we keep all of the users and items. In addition, we chose the random seed as 9876 when splitting the users into training/validation/test sets.
2https://tianchi.aliyun.com/dataset/dataDetail?dataId=53
3https://tianchi.aliyun.com/dataset/dataDetail?dataId=35680
4https://kaggle.com/netflix-inc/netflix-prize-data
J.2 EVALUATION PROTOCOLS
In Figure 5, we illustrate the difference between the transductive and inductive ranking evaluation protocols. In the transductive ranking problem, the model performance is evaluated on the users already known during model training, whereas in the inductive ranking problem the model performance is evaluated on unseen users. It is worth noting that in the testing phase, we sort all interactions of the validation/test users in chronological order, holding out the last interaction for testing and inductively generating the necessary representations on the rest of the data. In a nutshell, we evaluate our approach and the baselines on the challenging inductive next-item prediction problem.
J.3 EVALUATION METRICS
We adopt hit-rate (HR) and normalized discounted cumulative gain (NDCG) to evaluate the model performance. Suppose that the model provides N recommended items for user u as R_u, and let T_u denote the interacted items of the user; then HR is computed as follows:

HR@N = E_u 1_{|T_u ∩ R_u|},  (34)

where 1_{|Ω|} is equal to 1 if the set Ω is not empty and is equal to 0 otherwise. NDCG evaluates ranking performance by taking the positions of correct items into consideration:
NDCG@N = (1/Z) DCG@N = (1/Z) Σ_{j=1}^{N} ( 2^{1_{|R_u^j ∩ T_u|}} − 1 ) / log2(j + 1),  (35)
where Z is the normalization constant that represents the maximum value of DCG@N for T_u.
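For clarity, a minimal Python sketch of the two metrics for a single user is given below (the function and variable names are ours; it assumes the leave-one-out protocol described above):

```python
import numpy as np

def hr_ndcg_at_n(ranked_items, target_items, N=100):
    """HR@N and NDCG@N for a single user, cf. Eqs. (34)-(35)."""
    hits = [1 if item in target_items else 0 for item in ranked_items[:N]]
    hr = 1.0 if any(hits) else 0.0
    dcg = sum((2 ** h - 1) / np.log2(j + 2) for j, h in enumerate(hits))
    # Z: ideal DCG, with all relevant items ranked at the top positions.
    z = sum(1.0 / np.log2(j + 2) for j in range(min(len(target_items), N)))
    return hr, (dcg / z if z > 0 else 0.0)
```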
J.4 GRAPH LAPLACIAN
Let R denote the item-user rating matrix, and let D_v and D_e denote the diagonal degree matrices of vertices and edges respectively; then the graph Laplacian matrix used in our experiments is defined as follows:

Ł = I − D_v^{−1/2} R D_e^{−} R^⊤ D_v^{−1/2},  (36)

where I is the identity matrix.
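A minimal NumPy sketch for building this Laplacian from a binary item-user matrix is given below (the zero-degree guards are our own addition to handle isolated vertices):

```python
import numpy as np

def hypergraph_laplacian(R):
    """Graph Laplacian of Eq. (36) from a binary item-user matrix R (n x m)."""
    dv = R.sum(axis=1).astype(float)    # vertex (item) degrees
    de = R.sum(axis=0).astype(float)    # edge (user) degrees
    dv[dv == 0] = 1.0                   # guard isolated items (our addition)
    de[de == 0] = 1.0
    Dv = np.diag(dv ** -0.5)
    De_inv = np.diag(1.0 / de)
    return np.eye(R.shape[0]) - Dv @ R @ De_inv @ R.T @ Dv
```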
J.5 DISCUSSION ON PREDICTION FUNCTIONS
In experiments, we focus on making personalized recommendations to the users, so we are interested in the ranks of the items for each user. Specifically, for the top-k ranking problem we choose the items with the k largest predicted ratings,

Recommendation@k = max_{|O|=k} Σ_{v∈O, v∉Ω+} y(v).  (37)
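A minimal Python sketch of this top-k selection is given below (names are ours; it assumes the predicted scores ŷ and the observed index set Ω+ are available as arrays):

```python
import numpy as np

def recommend_top_k(y_hat, observed, k=100):
    """Pick the k items with the largest predicted scores, cf. Eq. (37)."""
    scores = np.asarray(y_hat, dtype=float).copy()
    scores[list(observed)] = -np.inf        # never re-recommend observed items
    return np.argsort(scores)[::-1][:k]     # indices of the k largest scores
```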
More importantly, our proposed method is also suitable for the link prediction problem, where the goal is to classify whether an edge between two vertices exists or not. This can be done by choosing a splitting point to partition the candidate edges into two parts. There are many different ways of choosing such a splitting point; one can select the optimal splitting point based on the ROC or AUC results on the validation set.
J.6 MODEL ARCHITECTURES
As mentioned before, we equip IDCF (Wu et al., 2021) with different GNN architectures as the backbone. Here we introduce the details for them.
GAT. We use the GATConv layer available in DGL for implementation. The detailed architecture description is as below:
• A sequence of one-layer GATConv with four heads.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use tanh as the activation.
• Use inner product between user embedding and item embedding as the ranking score.
GraphSAGE. We use the SAGEConv layer available in DGL for implementation. The detailed architecture description is as below:
• A sequence of two-layer SAGEConv.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use ReLU as the activation.
• Use inner product between user embedding and item embedding as the ranking score.
SGC. We use the SGConv layer available in DGL for implementation. The detailed architecture description is as below:
• One-layer SGConv with two hops.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use ReLU as the activation.
• Use inner product between user embedding and item embedding as the ranking score.
ChebyNet. We use the ChebConv layer available in DGL for implementation. The detailed architecture description is as below:
• One-layer ChebConv with two hops.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use ReLU as the activation.
• Use inner product between user embedding and item embedding as the ranking score.
ARMA. We use the ARMAConv layer available in DGL for implementation. The detailed architecture description is as below:
• One-layer ARMAConv with two hops.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use tanh as the activation.
• Use inner product between user embedding and item embedding as the ranking score.
We also summarize the implementation details of the compared sequential baselines as follows.
SASREC.5 We use the software provided by the authors for experiments. The detailed architecture description is as below:
• A sequence of two-block Transformers with one head.
• Set the maximum sequence length to 30.
• Use inner product between user embedding and item embedding as the ranking score.
BERT4REC.6 We use the software provided by the authors for experiments. The detailed architecture description is as below:
• A sequence of three-block Transformers with eight heads.
• Set the maximum sequence length to 30 with masked probability 0.2.
• Use inner product between user embedding and item embedding as the ranking score.
5https://github.com/kang205/SASRec
6https://github.com/FeiSun/BERT4Rec
GREC.7 We use the software provided by the authors for experiments. The detailed architecture description is as below:
• A sequence of six-layer dilated CNNs with degrees 1, 2, 2, 4, 4, 8.
• Set the maximum sequence length to 30 with masked probability 0.2.
• Use inner product between user embedding and item embedding as the ranking score.
7https://github.com/fajieyuan/WWW2020-grec

1. What is the focus of the paper regarding one-bit matrix completion?
2. What are the strengths and weaknesses of the proposed unified graph signal sampling framework?
3. How do the conditions in Theorem 3 and 4 relate to the paper's contributions?
4. Are there any concerns about the clarity and quality of the writing in the paper?
5. How does the reviewer assess the novelty and reproducibility of the paper's content?
Summary Of The Paper
The paper studies the problem of one-bit matrix completion. The paper claims to propose a unified graph signal sampling framework that enjoys the benefits of graph signal analysis and processing. The authors provide some theorems related to the quality of reconstruction.
Strengths And Weaknesses
Due either to the lack of my knowledge in this area or to the writing of the paper, I cannot understand what problem this paper tries to solve. I understand what one-bit matrix completion is, but I do not understand, at least from the first three Sections, how it is "unified" with graph signals or graph Laplacians. This is reflected in my confidence rating.
See Clarity, Quality, Novelty And Reproducibility for details.
How should readers interpret the conditions in Theorems 3 and 4? Can the authors provide some examples in which they hold? Do these conditions hold in the experiments?
Clarity, Quality, Novelty And Reproducibility
It is hard to follow even the first paragraph. This is hurting the readability of the whole paper.
Why does M have an extra column? What is the relationship between M and y?
Since s is just a noisy version of y obtained by flipping digits, is it necessary to define \xi in this manner, instead of saying something as simple as "s_i = y_i with probability 1-\rho, and 1 - y_i with probability \rho"? As a reference, [1] defines the same mechanism in a much clearer way.
Despite saying "It is obvious now that the problem of inductive 1-bit matrix completion is equivalent to recovering clean y from corrupted s", I don't see a formal definition of the problem. What is the relationship between y and M, R, \Phi? It should be self-contained for readers.
References:
[1] Davenport, Mark A., et al. "1-bit matrix completion." Information and Inference: A Journal of the IMA 3.3 (2014): 189-223.
Title
Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution
Abstract
Inductive one-bit matrix completion is motivated by modern applications such as recommender systems, where new users appear at test time with ratings consisting of only ones and no zeros. We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing. The key idea is to transform each user's ratings on the items into a function (graph signal) on the vertices of an item-item graph, then learn structural graph properties to recover the function from its values on certain vertices — the problem of graph signal sampling. We propose a class of regularization functionals that takes into account discrete random label noise in the graph vertex domain, then develop the GS-IMC approach which biases the reconstruction towards functions that vary little between adjacent vertices for noise reduction. Theoretical results show that accurate reconstructions can be achieved under mild conditions. For the online setting, we develop a Bayesian extension, i.e., BGS-IMC, which considers continuous random Gaussian noise in the graph Fourier domain and builds upon a prediction-correction update algorithm to obtain unbiased and minimum-variance reconstructions. Both GS-IMC and BGS-IMC have closed-form solutions and thus are highly scalable in large data, as verified on public benchmarks.
1 INTRODUCTION
In domains such as recommender systems and social networks, only "likes" (i.e., ones) are observed in the system, and service providers (e.g., Netflix) are interested in discovering potential "likes" for existing users to stimulate demand. This motivates the problem of 1-bit matrix completion (OBMC), whose goal is to recover missing values in an n-by-m item-user matrix R ∈ {0, 1}^{n×m}. We note that R_{i,j} = 1 means that item i is rated by user j, but R_{i,j} = 0 is essentially unlabeled or unknown, being a mixture of unobserved positive examples and true negative examples.
However, in the real world new users, who are not exposed to the model during training, may appear at the testing stage. This fact stimulates the development of inductive 1-bit matrix completion, which aims to recover an unseen vector y ∈ {0, 1}^n from its partial positive entries Ω+ ⊆ {j | y_j = 1} at test time. Fig. 1(a) emphasizes the difference between conventional and inductive approaches. More formally, let M ∈ {0, 1}^{n×(m+1)} denote the underlying matrix, where only a subset of positive examples Ψ is randomly sampled from {(i, j) | M_{i,j}=1, i≤n, j≤m} such that R_{i,j}=1 for (i, j)∈Ψ and R_{i,j}=0 otherwise. Considering the (m+1)-th column y of matrix M, we likewise denote its observations s_i=1 for i ∈ Ω+ and s_i=0 otherwise. We note that the sampling process here assumes that there exists a random label noise ξ which flips a 1 to 0 with probability ρ, or equivalently s = y + ξ where

ξ_i = −1 for i ∈ {j | y_j = 1} − Ω+, and ξ_i = 0 otherwise.  (1)

Fig. 1(a) presents an example of s, y, ξ to better understand their relationships.
Fundamentally, the reconstruction of true y from corrupted s bears a resemblance with graph signal sampling. Fig. 1(b) shows that the item-user rating matrix R can be used to define a homogeneous
∗Junchi Yan is the correspondence author who is also with Shanghai AI Laboratory. The work was in part supported by NSFC (62222607), STCSM (22511105100).
item-item graph (see Sec 3.1), such that user ratings y/s on items can be regarded as signals residing on graph nodes. The reconstruction of bandlimited graph signals from certain subsets of vertices (see Sec 2) has been extensively studied in graph signal sampling (Pesenson, 2000; 2008).
Despite popularity in areas such as image processing (Shuman et al., 2013; Pang & Cheung, 2017; Cheung et al., 2018) and matrix completion (Romero et al., 2016; Mao et al., 2018; McNeil et al., 2021), graph signal sampling appears less studied for the specific inductive one-bit matrix completion problem focused on in this paper (see Appendix A for detailed related works). Probably most closely related to our approach are MRFCF (Steck, 2019) and SGMC (Chen et al., 2021) which formulate their solutions as spectral graph filters. However, we argue that these methods are orthogonal to us since they focus on optimizing the rank minimization problem, whereas we optimize the functional minimization problem, thereby making it more convenient and straightforward to process and analyze the matrix data with vertex-frequency analysis (Hammond et al., 2011; Shuman et al., 2013), time-variant analysis (Mao et al., 2018; McNeil et al., 2021), smoothing and filtering (Kalman, 1960; Khan & Moura, 2008). Furthermore, (Steck, 2019; Chen et al., 2021) can be incorporated as special cases of our unified graph signal sampling framework (see Appendix B for detailed discussions).
Another emerging line of research has focused on learning the mapping from side information (or content features) to latent factors (Jain & Dhillon, 2013; Xu et al., 2013; Ying et al., 2018; Zhong et al., 2019). However, it has been recently shown (Zhang & Chen, 2020; Ledent et al., 2021; Wu et al., 2021) that in general this family of algorithms would possibly suffer inferior expressiveness when high-quality content is not available. Further, collecting personal data is likely to be unlawful as well as a breach of the data minimization principle in GDPR (Voigt & Von dem Bussche, 2017).
Much effort has also been made to leverage advanced graph neural networks (GNN) for improvements. van den Berg et al. (2017) represent the data matrix R by a bipartite graph, then generalize the representations to unseen nodes by summing the embeddings over the neighbors. Zhang & Chen (2020) develop graph neural networks which encode the subgraphs around an edge into latent factors, then decode the factors back to the value on the edge. Besides, Wu et al. (2021) consider the problem in a downsampled homogeneous graph (i.e., the user-user graph in recommender systems), then exploit attention networks to yield inductive representations. The key advantage of our approach is not only the closed-form solution, which takes a small fraction of the training time required for GNNs, but also the theoretical results that guarantee accurate reconstruction and provide guidance for practical applications.
We emphasize the challenges when connecting ideas and methods of graph signal sampling with inductive 1-bit matrix completion — 1-bit quantization and online learning. Specifically, 1-bit quantization raises challenges for formulating the underlying optimization problems: minimizing squared loss on the observed positive examples Ω+ yields a degenerate solution — the vector with all entries equal to one achieves zero loss; minimizing squared loss on the corrupted data s introduces a systematic error due to the random label noise ξ in Eq. (1). To address the issue, we represent the observed data R as a homogeneous graph, then devise a broader class of regularization functionals on graphs to mitigate the impact of the discrete random noise ξ. Existing theory for total variation denoising (Sadhanala et al., 2016; 2017) and graph regularization (Belkin et al., 2004; Huang et al., 2011), which takes into account continuous Gaussian noise, does not sufficiently address recoverability in inductive 1-bit matrix completion (see Sec 3.4). We finally manage to derive a closed-form solution, entitled Graph Sampling for Inductive (1-bit) Matrix Completion (GS-IMC), which biases the reconstruction towards functions that vary little between adjacent vertices for noise reduction.
For online learning, existing matrix factorization methods (Devooght et al., 2015; Volkovs & Yu, 2015; He et al., 2016) incrementally update model parameters via gradient descent, requiring an expensive line search to set the best learning rate. To scale up to large data, we develop a Bayesian extension called BGS-IMC where a prediction-correction algorithm is devised to instantly refresh the prediction given new incoming data. The prediction step tracks the evolution of the optimization problem such that the predicted iterate does not drift away from the optimum, while the correction step adjusts for the distance between the current prediction and the new information at each step. The advantage over baselines is that BGS-IMC considers the uncertainties in the graph Fourier domain, and the prediction-correction algorithm can efficiently provide unbiased and minimum-variance predictions in closed form, without using gradient descent techniques. The contributions are:
• New Inductive 1-bit Matrix Completion Framework. We propose and technically manage (for the first time to our best knowledge) to introduce graph signal sampling to inductive 1-bit matrix completion. It opens the possibility of benefiting the analysis and processing of the matrix with a signal processing toolbox including vertex-frequency analysis (Hammond et al., 2011; Shuman et al., 2013), time-variant analysis (Mao et al., 2018; McNeil et al., 2021), smoothing and filtering (Kalman, 1960; Khan & Moura, 2008), etc. We believe that our unified framework can serve as a new paradigm for 1-bit matrix completion, especially in large-scale and dynamic systems.
• Generalized Closed-form Solution. We derive a novel closed-form solution (i.e., GS-IMC) in the graph signal sampling framework, which incorporates existing closed-form solutions as special cases, e.g., (Chen et al., 2021; Steck, 2019). GS-IMC is learned from only positive data with discrete random noise. This is one of the key differences to typical denoising methods (Sadhanala et al., 2016) where efforts are spent on removing continuous Gaussian noise from a real-valued signal.
• Robustness Enhancement. We consider the online learning scenario and construct a Bayesian extension, i.e., BGS-IMC, where a new prediction-correction algorithm is proposed to instantly yield unbiased and minimum-variance predictions given new incoming data. Experiments in Appendix E show that BGS-IMC is more cost-effective than many neural models such as SASREC (Kang & McAuley, 2018), BERT4REC (Sun et al., 2019) and GREC (Yuan et al., 2020). We believe that this proves a potential for the future application of graph signal sampling to sequential recommendation.
• Theoretical Guarantee and Empirical Effectiveness. We extend the Paley-Wiener theorem of (Pesenson, 2009) on real-valued data to positive-unlabelled data with statistical noise. The theory shows that under mild conditions, unseen rows and columns in training can be recovered from a certain subset of their values that is present at test time. Empirical results on real-world data show that our methods achieve state-of-the-art performance for the challenging inductive Top-N ranking tasks.
2 PRELIMINARIES
In this section, we introduce the notions and provide the necessary background of graph sampling theory. Let G = (V,E,w) denote a weighted, undirected and connected graph, where V is a set of vertices with |V | = n, E is a set of edges formed by the pairs of vertices and the positive weight w(u, v) on each edge is a function of the similarity between vertices u and v.
Space L2(G) is the Hilbert space of all real-valued functions f : V → R with the following norm:

‖f‖ = √( Σ_{v∈V} |f(v)|² ),  (2)
and the discrete Laplace operator Ł is defined by the formula (Chung & Graham, 1997):
Łf(v) = (1/√d(v)) Σ_{u∈N(v)} w(u, v) ( f(v)/√d(v) − f(u)/√d(u) ),  f ∈ L2(G),

where N(v) signifies the neighborhood of node v and d(v) = Σ_{u∈N(v)} w(u, v) is the degree of v.
Definition 1 (Graph Fourier Transform). Given a function or signal f in L2(G), the graph Fourier transform and its inverse (Shuman et al., 2013) can be defined as follows:
f̃_G = U^⊤ f and f = U f̃_G,  (3)
where U represents the eigenfunctions of the discrete Laplace operator Ł, f̃_G denotes the signal in the graph Fourier domain and f̃_G(λl) = 〈f, ul〉 signifies the information at the frequency λl.¹

Definition 2 (Bandlimitedness). f ∈ L2(G) is called an ω-bandlimited function if its Fourier transform f̃_G has support in [0, ω], and ω-bandlimited functions form the Paley-Wiener space PWω(G).

Definition 3 (Graph Signal Sampling). Given y ∈ PWω(G), y can be recovered from its values on the vertices Ω+ by minimizing the objective below (Pesenson, 2000; 2008), with a positive scalar k:
min_{f∈L2(G)} ‖Ł^k f‖ s.t. f(v) = y(v), ∀v ∈ Ω+.  (4)
Recall that the observation in inductive 1-bit matrix completion consists of only ones and no zeros (i.e., y(v) = 1 for v ∈ Ω+) and ‖Ł^k 1‖ = 0. It is obvious that minimizing the loss on the observed entries corresponding to ones produces a degenerate solution — the vector with all entries equal to one achieves zero loss. From this point of view, existing theory for sampling real-valued signals (Pesenson, 2000; 2008) is not well suited to the inductive 1-bit matrix completion problem.
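To make Definitions 1–3 concrete, the following minimal NumPy sketch (our own toy path graph, not an example from the paper) computes the graph Fourier transform and verifies its inverse:

```python
import numpy as np

# Toy path graph with 5 vertices (our own example).
n = 5
W = np.diag(np.ones(n - 1), 1)
W = W + W.T                                   # symmetric adjacency of a path
d = W.sum(axis=1)                             # vertex degrees d(v)
L = np.eye(n) - np.diag(d ** -0.5) @ W @ np.diag(d ** -0.5)  # normalized Laplacian
lam, U = np.linalg.eigh(L)                    # eigenvalues act as frequencies
f = np.array([1.0, 1.0, 0.0, 0.0, 1.0])      # a signal on the vertices
f_tilde = U.T @ f                             # graph Fourier transform, Eq. (3)
assert np.allclose(U @ f_tilde, f)            # the inverse transform recovers f
```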
3 CLOSED-FORM SOLUTION FOR 1-BIT MATRIX COMPLETION
This section builds a unified graph signal sampling framework for inductive 1-bit matrix completion that can inductively recover y from positive ones on the set Ω+. The rationale behind our framework is that rows that have similar observations are likely to have similar reconstructions. This makes a lot of sense in practice; for example, a user (column) is likely to give similar items (rows) similar scores in recommender systems. To achieve this, we need to construct a homogeneous graph G where connected vertices represent rows which have similar observations, so that we can design a class of graph regularized functionals that encourage adjacent vertices on graph G to have similar reconstructed values. In particular, we manage to provide a closed-form solution to the matrix completion problem (entitled GS-IMC), together with theoretical bounds and insights.
3.1 GRAPH DEFINITION
We begin with the introduction of two different kinds of methods to construct homogeneous graphs using the zero-one matrix R ∈ R^{n×m}: (i) following the definition of hypergraphs (Zhou et al., 2007), matrix R can be regarded as the incidence matrix, so as to formulate the hypergraph Laplacian matrix as Ł = I − D_v^{−1/2} R D_e^{−} R^⊤ D_v^{−1/2}, where D_v ∈ R^{n×n} (D_e ∈ R^{m×m}) is the diagonal degree matrix of vertices (edges); and (ii) for regular graphs, one of the most popular approaches is to utilize the covariance between rows to form the adjacency matrix A_{i,j} = Cov(R_i, R_j) for i ≠ j, so that we can define the graph Laplacian matrix as Ł = I − D_v^{−1/2} A D_v^{−1/2}.
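As an illustration of the second construction, a minimal NumPy sketch is given below (clipping negative covariances to zero is our own assumption, made so that the edge weights stay positive):

```python
import numpy as np

def covariance_laplacian(R):
    """Regular-graph Laplacian from row covariances (second construction above)."""
    A = np.cov(R)                    # A[i, j] = Cov(R_i, R_j) across columns
    np.fill_diagonal(A, 0.0)         # affinity only between distinct rows
    A = np.clip(A, 0.0, None)        # keep weights non-negative (our assumption)
    d = A.sum(axis=1)
    d[d == 0] = 1.0                  # guard isolated vertices
    D = np.diag(d ** -0.5)
    return np.eye(R.shape[0]) - D @ A @ D
```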
3.2 GRAPH SIGNAL SAMPLING FRAMEWORK
Given a graph G = (V,E), any real-valued column y ∈ Rn can be viewed as a function on G that maps from V to R, and specifically the i-th vector component yi is equivalent to the function value y(i) at the i-th vertex. Now it is obvious that the problem of inductive matrix completion, of which the goal is to recover column y from its values on entries Ω+, bears a resemblance to the problem of graph signal sampling that aims to recover function y from its values on vertices Ω+.
However, most existing graph signal sampling methods (Romero et al., 2016; Mao et al., 2018; McNeil et al., 2021) yield degenerate solutions when applied to the 1-bit matrix completion problem. A popular heuristic is to treat some or all of the zeros as negative examples Ω−, then to recover y by optimizing the following functional minimization problem, given any k = 2^l, l ∈ N:
min_{f∈L2(G)} ‖[R(Ł)]^k f‖ s.t. ‖s_Ω − f_Ω‖ ≤ ε,  (5)
1To be consistent with (Shuman et al., 2013), ul (l-th column of matrix U) is the l-th eigenvector associated with the eigenvalue λl, and the graph Laplacian eigenvalues carry a notion of frequency.
where recall that s = y + ξ is the observed data corrupted by discrete random noise ξ, and s_Ω (f_Ω) signifies the values of s (f) only on Ω = Ω+ ∪ Ω−; R(Ł) = Σ_l R(λl) ul ul^⊤ denotes the regularized Laplace operator, in which {λl} and {ul} are respectively the eigenvalues and eigenfunctions of operator Ł. It is worth noting that s(i) = y(i) + ξ(i) = 0 for i ∈ Ω− is not true negative data, and hence Ω− will introduce a systematic bias whenever there exists i ∈ Ω− such that y(i) = 1. The choice of the regularization function R(λ) needs to account for two critical criteria: 1) the resulting regularization operator R(Ł) needs to be semi-positive definite; 2) as mentioned before, we expect the reconstruction ŷ to have similar values on adjacent nodes, so that uneven functions should be penalized more than even functions. To account for this, we adopt the family of positive, monotonically increasing functions (Smola & Kondor, 2003) as presented in Table 1.
To this end, we summarize two natural questions concerning our framework: 1) What are the benefits of introducing the regularized Laplacian penalty? It is obvious that minimizing the discrepancy between s_Ω and f_Ω does not provide the generalization ability to recover unknown values on the rest of the vertices V − Ω; Theorems 4 and 5 answer this question by examining the error bounds. 2) What kind of R(Ł) constitutes a reasonable choice? It has been studied in (Huang et al., 2011) that R(Ł) is most appropriate if it is unbiased, and an unbiased R(Ł) reduces variance without incurring any bias on the estimator. We also highlight the empirical study in Appendix C that evaluates how the performance is affected by the definition of graph G and the regularization function R(λ).
3.3 CLOSED-FORM SOLUTION
In what follows, we aim to provide a closed-form solution for our unified framework by treating all of the zeros as negative examples, i.e., s(v) = 1 for v ∈ Ω+ and s(v) = 0 otherwise. Then by using the method of Lagrange multipliers, we reformulate Eq. (5) to the following problem:
min_{f∈L2(G)} (1/2)〈f, R(Ł)f〉 + (ϕ/2)‖s − f‖²,  (6)
where ϕ > 0 is a hyperparameter. Obviously, this problem has a closed-form solution:
ŷ = ( I + R(Ł)/ϕ )^− s = ( Σ_l (1 + R(λl)/ϕ) ul ul^⊤ )^− s = H(Ł)s,  (7)
where H(Ł) = Σ_l H(λl) ul ul^⊤ with kernel 1/H(λl) = 1 + R(λl)/ϕ, and we exemplify H(λ) for ϕ = 1 in Table 1. From the viewpoint of spectral graph theory, our GS-IMC approach is essentially a spectral graph filter that amplifies (attenuates) the contributions of low (high)-frequency functions.
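For illustration, a minimal dense NumPy sketch of this closed-form solution is given below (function names are ours; a practical implementation would keep only the k leading eigenpairs, as in Appendix D):

```python
import numpy as np

def gs_imc_predict(L, s, R=lambda lam: lam, phi=10.0):
    """Closed-form GS-IMC reconstruction y_hat = H(L) s, cf. Eq. (7).

    L   : (n, n) symmetric graph Laplacian of the item-item graph
    s   : (n,) observed zero-one ratings of one user
    R   : regularization function R(lambda), e.g. Tikhonov R(lam) = lam
    phi : regularization weight (hyperparameter)
    """
    lam, U = np.linalg.eigh(L)            # eigenpairs of the Laplacian
    h = 1.0 / (1.0 + R(lam) / phi)        # spectral kernel H(lambda)
    return U @ (h * (U.T @ s))            # filter s in the graph Fourier domain
```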
Remark. To understand low-frequency and high-frequency functions, Figure 2 presents case studies in the context of recommender systems on the Netflix prize data (Bennett et al., 2007). Specifically, we divide the vertices (items) into four classes: very high degree (> 5000), high degree (> 2000), medium degree (> 100) and low degree vertices. Then, we report the recall results of all four classes in different Paley-Wiener spaces PW_{λ50}(G), . . . , PW_{λ1000}(G) for top-100 ranking prediction. The interesting observations are: (1) the low-frequency functions with eigenvalues less than λ100 contribute nothing to low degree vertices; and (2) the high-frequency functions whose eigenvalues are greater than λ500 do not help to increase the performance on very high degree vertices. This finding implies that low (high)-frequency functions reflect user preferences on the popular (cold) items. From this viewpoint, the model defined in Eq. (7) aims to exploit the items with high click-through rate with high certainty, which makes sense in commercial applications.
3.4 ERROR ANALYSIS
Our GS-IMC approach defined in Eq. (7) bears a similarity to total variation denoising (Sadhanala et al., 2016; 2017), graph-constrained regularization (Belkin et al., 2004; 2006), and particularly Laplacian shrinkage methods (Huang et al., 2011). However, we argue that the proposed GS-IMC approach is fundamentally different from previous works. Specifically, they operate on real-valued data while GS-IMC deals with positive-unlabeled data. We believe that our problem setting is more complicated, since the unlabeled data is a mixture of unobserved positive examples and true negative examples. In addition, existing methods analyze the recoverability considering statistical noise to be continuous Gaussian, e.g., Theorem 3 (Sadhanala et al., 2016), Theorem 1.1 (Pesenson, 2009) etc.
However, we study the upper bound of GS-IMC in the presence of discrete random label noise ξ. Specifically, Theorem 4 extends the Paley-Wiener theorem of (Pesenson, 2009) on real-valued data to positive-unlabelled data, showing that a bandlimited function y can be recovered from its values on a certain set Ω. Theorem 5 takes into account statistical noise ξ and shows that a bandlimited function y can be accurately reconstructed if C²n = C > 0 is a constant, not growing with n.
Theorem 4 (Error Analysis, extension of Theorem 1.1 in (Pesenson, 2009)). Given R(λ) with λ ≤ R(λ) on graph G = (V,E), assume that Ωc = V − Ω admits the Poincare inequality ‖φ‖ ≤ Λ‖Łφ‖ for any φ ∈ L2(Ωc) with Λ > 0; then for any y ∈ PWω(G) with 0 < ω ≤ R(ω) < 1/Λ,

‖y − ŷk‖ ≤ 2(ΛR(ω))^k ‖y‖ and y = lim_{k→∞} ŷk,  (8)
where k is a pre-specified hyperparameter and ŷk is the solution of Eq. (5) with ε = 0.
Remark. Theorem 4 indicates that a better estimate of y can be achieved by simply using a higher k, but there is a trade-off between the accuracy of the estimate on one hand, and complexity and numerical stability on the other. We found by experiments that GS-IMC with k = 1 can achieve SOTA results for inductive top-N recommendation on benchmarks. We provide more discussions in Appendix G.

Theorem 5 (Error Analysis, with label noise). Suppose that ξ is the random noise with flip rate ρ, and positive λ1 ≤ · · · ≤ λn are the eigenvalues of Laplacian Ł; then for any function y ∈ PWω(G),
E[MSE(y, ŷ)] ≤ (C²n/n) ( ρ / ( R(λ1)(1 + R(λ1)/ϕ)² ) + 1/(4ϕ) ),  (9)

where C²n = R(ω)‖y‖², ϕ is the regularization parameter and ŷ is defined in Eq. (7).
Remark. Theorem 5 shows that for a constant C²n = C > 0 (not growing with n), the reconstruction error converges to zero as n grows large. Also, the reconstruction error decreases as R(ω) declines, which means low-frequency functions can be recovered more easily than high-frequency functions. We provide more discussions on ϕ, ρ in Appendix H.
4 BAYESIAN GS-IMC FOR ONLINE LEARNING
In general, an inductive learning approach such as GAT (Veličković et al., 2017) or SAGE (Hamilton et al., 2017), etc., can naturally cope with the online learning scenario where the prediction is refreshed given a newly observed example. Essentially, GS-IMC is an inductive learning approach that can update the prediction more effectively than previous matrix completion methods (Devooght et al., 2015; He et al., 2016). Let ∆s denote newly arriving data that might be one-hot as in Fig. 3(a), and let ŷ denote the original prediction based on data s; then we can efficiently update ŷ to ŷnew as follows:

ŷnew = H(Ł)(s + ∆s) = ŷ + H(Ł)∆s.  (10)
However, we argue that GS-IMC ingests the new data in an unrealistic, suboptimal way. Specifically, it does not take into account the model uncertainties, assuming that the observed positive data is noise-free. This assumption limits model’s fidelity and flexibility for real applications. In addition, it assigns a uniform weight to each sample, assuming that the innovation, i.e., the difference between the current a priori prediction and the current observation information, is equal for all samples.
4.1 PROBLEM FORMULATION
To model the uncertainties, we denote a measurement by z = U^⊤ŷ (with Fourier basis U), which represents the prediction ŷ in the graph Fourier domain, and we assume that z is determined by a stochastic process.
In Fig. 3(b), the measurement z is governed by the hidden state x, and the noise ν captures the data uncertainties in an implicit manner. The choice of the state transition equation needs to account for two critical criteria: (1) the model uncertainties need to be considered; (2) the transition from state x to state xnew needs to represent the evolution of the predictions ŷ/ŷnew defined in Eq. (10).
To account for this, we propose a Bayesian extension of GS-IMC, entitled BGS-IMC, which considers the stochastic filtering problem in a dynamic state-space form:
xnew = x + F∆s + η,  (11)
znew = xnew + ν,  (12)
where Eq. (11) essentially follows Eq. (10) in the graph Fourier domain, i.e., multiplying both sides of Eq. (10) by U^⊤. In control theory, F = U^⊤H(Ł) is called the input matrix and ∆s represents the system input vector. The state equation (11) describes how the true state x, xnew evolves under the impact of the process noise η ∼ N(0, Ση), and the measurement equation (12) characterizes how a measurement znew = U^⊤(s + ∆s) of the true state xnew is corrupted by the measurement noise ν ∼ N(0, Σν). It is worth noting that a larger determinant of Σν means that the data points are more dispersed, while for Ση a large determinant implies that BGS-IMC is not sufficiently expressive and it is better to use the measurement for decision making, i.e., BGS-IMC reduces to GS-IMC.
Using Bayes rule, the posterior is given by:
p(xnew|∆s, znew) ∝ p(znew|xnew)p(xnew|∆s), (13)
where p(znew|xnew) and p(xnew|∆s) follow a Gauss-Markov process.
4.2 PREDICTION-CORRECTION UPDATE ALGORITHM
To make an accurate prediction, we propose a prediction-correction update algorithm, resembling workhorse Kalman filtering-based approaches (Kalman, 1960; Wiener et al., 1964). To our knowledge, the class of prediction-correction methods appears less studied in the domain of 1-bit matrix completion, despite its popularity in time-series forecasting (Simonetto et al., 2016; de Bézenac et al., 2020) and computer vision (Matthies et al., 1989; Scharstein & Szeliski, 2002).
In the prediction step, we follow the evolution of the state as defined in Eq. (11) to compute the mean and the covariance of conditional p(xnew|∆s):
E[xnew|∆s] = x̂ + F∆s = x̄new and Var(xnew|∆s) = P + Ση = P̄new,  (14)

where x̂ is the estimate of state x and P is the estimate covariance, i.e., P = E[(x − x̂)(x − x̂)^⊤], while x̄new, P̄new are the extrapolated estimate state and covariance respectively. Meanwhile, it is easy to obtain the mean and the covariance of the conditional p(znew|xnew):
E[znew|xnew] = E[xnew + ν] = xnew and Var(znew|xnew) = E[νν>] = Σν . (15)
In the correction step, we combine Eq. (13) with Eq. (14) and (15):

p(xnew|∆s, znew) ∝ exp( −(xnew − znew)^⊤ Σν^− (xnew − znew)/2 − (xnew − x̄new)^⊤ P̄new^− (xnew − x̄new)/2 ).
By solving ∂ ln p(xnew|∆s, znew)/∂xnew = 0, we have the following corrected estimate state x̂new and covariance Pnew, where we recall that the new measurement is defined as znew = U^⊤(s + ∆s):

x̂new = x̄new + K(znew − x̄new),  (16)
Pnew = (I − K)P̄new(I − K)^⊤ + KΣνK^⊤,  (17)
K = P̄new(P̄new + Σν)^−,  (18)
where K is the Kalman gain and znew − x̄new is called the innovation. It is worth noting that Eq. (16) adjusts the predicted iterate x̄new in terms of the innovation, the key difference to GS-IMC and existing methods, e.g., GAT (Veličković et al., 2017) and SAGE (Hamilton et al., 2017).
Remark. The BGS-IMC approach is highly scalable in Paley-Wiener spaces. Let PWω(G) be the span of k (≪ n) eigenfunctions whose eigenvalues are no greater than ω; then the transition matrix F in (11) is k-by-n and every covariance matrix is of size k × k. Computationally, when P, Ση, Σν are diagonal, it takes O(k²) time to compute x̂new and Pnew, and O(nk) time for x̄new and P̄new. The total time complexity is O(nk + k²), linear in the number of vertices n. Further, Proposition 6 shows that x̂new in (16) is an unbiased and minimum-variance estimator.

Proposition 6. Given an observation ∆s, provided F is known, x̂new obtained in Eq. (16) is the optimal linear estimator in the sense that it is unbiased and minimum-variance.
To summarize, the complete procedure of BGS-IMC is to first specify Ση, Σν, P using prior knowledge, then to calculate the extrapolated state x̄new using (14), and finally to obtain x̂new using (16), so that we have the updated model prediction ŷnew = Ux̂new that ingests the new observation.
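A minimal NumPy sketch of one prediction-correction step is given below (variable names are ours; when P, Ση, Σν are diagonal, the matrix inverse reduces to elementwise division, matching the O(nk + k²) complexity discussed above):

```python
import numpy as np

def bgs_imc_step(x_hat, P, s, delta_s, F, U, Sigma_eta, Sigma_nu):
    """One prediction-correction update of BGS-IMC, cf. Eqs. (14), (16)-(18).

    x_hat : (k,) current state estimate in the graph Fourier domain
    P     : (k, k) current estimate covariance
    F     : (k, n) input matrix, i.e. U^T H(L) on the k leading eigenvectors
    U     : (n, k) leading eigenvectors; s, delta_s : (n,) old and new data
    """
    # Prediction step, Eq. (14): extrapolate the state and covariance.
    x_bar = x_hat + F @ delta_s
    P_bar = P + Sigma_eta
    # Correction step: fold in the new measurement z_new = U^T (s + delta_s).
    z_new = U.T @ (s + delta_s)
    K = P_bar @ np.linalg.inv(P_bar + Sigma_nu)               # Kalman gain, Eq. (18)
    x_new = x_bar + K @ (z_new - x_bar)                       # Eq. (16)
    I = np.eye(P.shape[0])
    P_new = (I - K) @ P_bar @ (I - K).T + K @ Sigma_nu @ K.T  # Eq. (17)
    return x_new, P_new  # vertex-domain prediction: y_new = U @ x_new
```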
5 EXPERIMENT
This section evaluates GS-IMC (in Section 3) and BGS-IMC (in Section 4) on real-world datasets. All the experiments are conducted on the machines with Xeon 3175X CPU, 128G memory and P40 GPU with 24 GB memory. The source code and models will be made publicly available.
5.1 EXPERIMENTAL SETUP
We adopt three large real-world datasets widely used for evaluating recommendation algorithms: (1) Koubei (1, 828, 250 ratings of 212, 831 users and 10, 213 items); (2) Tmall (7, 632, 826 ratings of 320, 497 users and 21, 876 items); (3) Netflix (100, 444, 166 ratings of 400, 498 users and 17, 770 items). For each dataset, we follow the experimental protocols in (Liang et al., 2018; Wu et al., 2017a) for inductive top-N ranking, where the users are split into training/validation/test set with ratio 8 : 1 : 1. Then, we use all the data from the training users to optimize the model parameters. In the testing phase, we sort all interactions of the validation/test users in chronological order, holding out the last one interaction for testing and inductively generating necessary representations using the rest data. The results in terms of hit-rate (HR) and normalized discounted cumulative gain (NDCG) are reported on the test set for the model which delivers the best results on the validation set.
We implement our method in Apache Spark with Intel MKL, where matrix computation is parallelized and distributed. In experiments, we denote the item-user rating matrix by R and further define the Laplacian Ł = I − D_v^{−1/2} R D_e^{−} R^⊤ D_v^{−1/2}. We set a = 4, γ = 1, ϕ = 10 for GS-IMC, while we set the covariance to Ση = Σν = 10^{−4}I and initialize P using the validation data for BGS-IMC. In the test stage, if a user has |Ω| training interactions, BGS-IMC uses the first |Ω| − 1 interactions to produce the initial state x̂, then feeds the last interaction to simulate the online update.
In the literature, there are few existing works that enable inductive inference for top-N ranking using only the ratings. To make thorough comparisons, we prefer to strengthen IDCF with GCMC for improved performance (IDCF+ for short) rather than report the results of IDCF (Wu et al., 2021) and GCMC (van den Berg et al., 2017) as individuals. Furthermore, we study their performance with different graph neural networks including ChebyNet (Defferrard et al., 2016), GAT (Veličković et al., 2017), GraphSage (Hamilton et al., 2017), SGC (Wu et al., 2019) and ARMA (Bianchi et al., 2021). We adopt the Adam optimizer (Kingma & Ba, 2015) with the learning rate decayed by 0.98 every epoch. We grid-search the learning rate and L2 regularizer over {0.1, 0.01, . . . , 0.00001}, the dropout rate over {0.1, 0.2, . . . , 0.7} and the latent factor size over {32, 64, . . . , 512} for the optimal performance. In addition, we also report the results of the shallow models, i.e., MRFCF (Steck, 2019) and SGMC (Chen et al., 2021), which are most closely related to our proposed method. The software provided by the authors is used in the experiments.
We omit the results of Markov chain Monte Carlo based FISM (He & McAuley, 2016), variational auto-encoder based MultVAE (Liang et al., 2018), scalable Collrank (Wu et al., 2017b), graph neural networks GCMC (van den Berg et al., 2017) and NGCF (Wang et al., 2019), as their accuracies were found to be below par in SGMC (Chen et al., 2021) and IDCF (Wu et al., 2021).
5.2 ACCURACY COMPARISON
In this section, GS-IMC and BGS-IMC assume that the underlying signal is λ_1000-bandlimited, and we compare them with eight state-of-the-art graph-based baselines, including spatial graph models (i.e., IDCF (Wu et al., 2021), IDCF+GAT (Veličković et al., 2017), IDCF+GraphSAGE (Hamilton et al., 2017)), approximate spectral graph models with high-order polynomials (i.e., IDCF+SGC (Wu et al., 2019), IDCF+ChebyNet (Defferrard et al., 2016), IDCF+ARMA (Bianchi et al., 2021)) and exact spectral graph models (i.e., MRFCF (Steck, 2019) and SGMC (Chen et al., 2021)).
In Table 2 and Table 3, the results on the real-world Koubei, Tmall and Netflix datasets show that BGS-IMC outperforms all the baselines on all the datasets. Note that MRFCF (Steck, 2019) is the full-rank version of GS-IMC with (one-step) random walk regularization. We can see that MRFCF underperforms its counterpart on all three datasets, which demonstrates the advantage of the bandlimited assumption for inductive top-N ranking tasks. Further, BGS-IMC consistently outperforms GS-IMC on all three datasets by a margin, which proves the efficacy of the prediction-correction algorithm for incremental updates. Additionally, we provide extensive ablation studies in Appendix C, scalability studies in Appendix D and more comparisons with SOTA sequential models in Appendix E.
To summarize, the reason why the proposed method can further improve the prediction accuracy is due to 1) GS-IMC exploits the structural information in the 1-bit matrix to mitigate the negative influence of discrete label noise in the graph vertex domain; and 2) BGS-IMC further improves the prediction accuracy by considering continuous Gaussian noise in the graph Fourier domain and yielding unbiased and minimum-variance predictions using prediction-correction update algorithm.
6 CONCLUSION
We have introduced a unified graph signal sampling framework for inductive 1-bit matrix completion, together with theoretical bounds and insights. Specifically, GS-IMC is devised to learn the structural information in the 1-bit matrix to mitigate the negative influence of discrete label noise in the graph vertex domain. Second, BGS-IMC takes into account the model uncertainties in the graph Fourier domain and provides a prediction-correction update algorithm to obtain unbiased and minimum-variance reconstructions. Both GS-IMC and BGS-IMC have closed-form solutions and are highly scalable. Experiments on the task of inductive top-N ranking have shown their superiority.
A RELATED WORK
Inductive matrix completion. There has been a flurry of research on the problem of inductive matrix completion (Chiang et al., 2018; Jain & Dhillon, 2013; Xu et al., 2013; Zhong et al., 2019), which leverages side information (or content features) in the form of feature vectors to predict inductively on new rows and columns. The intuition behind this family of algorithms is to learn mappings from the feature space to the latent factor space, such that inductive matrix completion methods can adapt to new rows and columns without retraining. However, it has been recently shown (Zhang & Chen, 2020; Ledent et al., 2021; Wu et al., 2021) that inductive matrix completion methods provide limited performance due to the inferior expressiveness of the feature space. On the other hand, the prediction accuracy depends strongly on the content quality, but in practice high-quality content is becoming hard to collect due to legal risks (Voigt & Von dem Bussche, 2017). By contrast, one advantage of our approach is the capacity of inductive learning without using side information.
Graph neural networks. Inductive representation learning over graph structured data has received significant attention recently due to its ubiquitous applicability. Among the existing works, GraphSAGE (Hamilton et al., 2017) and GAT (Veličković et al., 2017) propose to generate embeddings for previously unseen data by sampling and aggregating features from a node’s local neighbors. In the meantime, various approaches such as ChebyNet (Defferrard et al., 2016) and GCN (Kipf & Welling, 2016) exploit convolutional neural networks to capture sophisticated feature information but are generally less scalable. To address the scalability issue, Wu et al. (2019) develop simplified graph convolutional networks (SGCN) which utilize polynomial filters to simulate the stacked graph convolutional layers. Furthermore, Bianchi et al. (2021) extend auto-regressive moving average (ARMA) filters to convolutional layers for broader frequency responses.
To leverage recent advance in graph neural networks, lightGCN (He et al., 2020), GCMC (van den Berg et al., 2017) and PinSAGE (Ying et al., 2018) represent the matrix by a bipartite graph then generalize the representations to unseen nodes by summing the content-based embeddings over the neighbors. Differently, IGMC (Zhang & Chen, 2020) trains graph neural networks which encode the subgraphs around an edge into latent factors then decode the factors back to the value on the edge. Recently, IDCF (Wu et al., 2021) studies the problem in a downsampled homogeneous graph (i.e., user-user graph in recommender systems) then applies attention networks to yield inductive representations. Probably most closely related to our approach are IDCF (Wu et al., 2021) and IGMC (Zhang & Chen, 2020) which do not assume any side information, such as user profiles and item properties. The key advantage of our approach is not only the closed form solution for efficient GNNs training, but also the theoretical results which guarantee the reconstruction of unseen rows and columns and the practical guidance for potential improvements.
Graph signal sampling. In general, graph signal sampling aims to reconstruct real-valued functions defined on the vertices (i.e., graph signals) from their values on certain subset of vertices. Existing approaches commonly build upon the assumption of bandlimitedness, by which the signal of interest lies in the span of leading eigenfunctions of the graph Laplacian (Pesenson, 2000; 2008). It is worth noting that we are not the first to consider the connections between graph signal sampling and matrix completion, as recent work by Romero et al. (Romero et al., 2016) has proposed a unifying kernel based framework to broaden both of graph signal sampling and matrix completion perspectives. However, we argue that Romero’s work and its successors (Benzi et al., 2016; Mao et al., 2018; McNeil et al., 2021) are orthogonal to our approach as they mainly focus on real-valued matrix completion in the transductive manner. Specifically, our approach concerns two challenging problems when connecting the ideas and methods of graph signal sampling with inductive one-bit matrix completion — one-bit quantization and online learning.
To satisfy the requirement of online learning, existing works learn the parameters for new rows and columns by performing either stochastic gradient descent used in MCEX (Giménez-Febrer et al., 2019), or alternating least squares used in eALS (He et al., 2016). The advantage of BGS-IMC is threefold: (i) BGS-IMC has closed-form solutions, bypassing the well-known difficulty of tuning the learning rate; (ii) BGS-IMC considers the random Gaussian noise in the graph Fourier domain, characterizing the uncertainties in the measurement and modeling; and (iii) the prediction-correction algorithm, resembling Kalman filtering, can provide unbiased and minimum-variance reconstructions.
Probably most closely related to our approach are SGMC (Chen et al., 2021) and MRFCF (Steck, 2019) in the sense that both of them formulate their solutions as spectral graph filters and can be regarded as methods for data filtering in domains of discrete signal processing. More specifically, SGMC optimizes latent factors V,U by minimizing the normalized matrix reconstruction error:
min_{U,V} ‖ D_v^{-1/2} R D_e^{-1/2} − VU ‖, s.t. ‖U‖ ≤ ε, ‖V‖ ≤ η, (19)
while MRFCF minimizes the following matrix reconstruction error:
min X ‖ R−XR ‖ +λ ‖ X ‖ s.t. diag(X) = 0, (20)
where the diagonal entries of parameter X are forced to zero. It is obvious now that both SGMC and MRFCF focus on minimizing the matrix reconstruction error. This is one of the key differences to our graph signal sampling framework which optimizes the functional minimization problem as defined in Eq. 5. We argue that our problem formulation is more suitable for the problem of inductive one-bit matrix completion, since it focuses on the reconstruction of bandlimited functions, no matter whether the function is observed in the training or at test time. Perhaps more importantly, both methods (Chen et al., 2021; Steck, 2019) can be included as special cases of our framework. We believe that a unified framework across graph signal sampling and inductive matrix completion could benefit both fields, since the modeling knowledge from both domains can be more deeply shared.
Advantages of graph signal sampling perspectives. A graph signal sampling perspective requires modeling the 1-bit matrix data as signals on a graph and formulating the objective in the functional space. Doing so opens the possibility of processing, filtering and analyzing the matrix data with vertex-frequency analysis (Hammond et al., 2011; Shuman et al., 2013), time-variant analysis (Mao et al., 2018; McNeil et al., 2021), smoothing and filtering (Kalman, 1960; Khan & Moura, 2008), etc. In this paper, we technically explore the use of graph spectral filters to inductively recover the missing values of the matrix, a Kalman-filtering based approach to deal with streaming data in the online learning scenario, and vertex-frequency analysis to discover the advantages of the dynamic BERT4REC model over the static BGS-IMC model. We believe that our graph signal sampling framework can serve as a new paradigm for 1-bit matrix completion, especially in large-scale and dynamic systems.
B GENERALIZING SGMC AND MRFCF
This section shows how GS-IMC generalizes SGMC (Chen et al., 2021) and MRFCF (Steck, 2019).
GS-IMC generalizes SGMC. Given the observation R, we follow the standard routine of hypergraphs (Zhou et al., 2007) to calculate the hypergraph Laplacian matrix Ł = I − D_v^{-1/2} R D_e^{-1} R^⊤ D_v^{-1/2}, where D_v (D_e) is the diagonal degree matrix of vertices (edges). Then the rank-k approximation (see Eq. (9) in (Chen et al., 2021)) is equivalent to our result using the bandlimited norm R(λ) = 1 if λ ≤ λ_k and R(λ) = ∞ otherwise,
ŷ = (∑_l (1 + R(λ_l)/ϕ) u_l u_l^⊤)^{-1} s = ∑_{l≤k} u_l u_l^⊤ s = U_k U_k^⊤ s,
where we set ϕ = ∞ and lim_{ϕ→∞} R(λ)/ϕ = ∞ for λ > λ_k, and matrix U_k comprises the k leading eigenvectors whose eigenvalues are less than or equal to λ_k.
GS-IMC generalizes MRFCF. Given R, we simply adopt the correlation relationship to construct the affinity matrix and define the Laplacian as Ł = 2I − D_v^{-1/2} R R^⊤ D_v^{-1/2}. Then the matrix approximation (see Eq. (4) in (Steck, 2019)) is equivalent to our GS-IMC approach using the one-step
random walk norm,

ŷ = (∑_l (1 + 1/(a − λ_l)) u_l u_l^⊤)^{-1} s
  = ∑_l (1 − 1/(a − λ_l + 1)) u_l u_l^⊤ s
  = { I − ((a + 1)I − Ł)^{-1} } s
  = { I − ((a − 1)I + D_v^{-1/2} R R^⊤ D_v^{-1/2})^{-1} } s,
where we set ϕ = 1 and a ≥ λmax is a pre-specified parameter for the random walk regularization.
C ABLATION STUDIES
This study evaluates how GS-IMC and BGS-IMC perform with different choice of the regularization function and the graph definition. In the following, we assume the underlying signal to recover is in the Paley-Wiener space PWλ1000(G), and hence we only take the first 1000 eigenfunctions whose eigenvalues are not greater than λ1000 to make predictions.
C.1 IMPACT OF REGULARIZATION FUNCTIONS
Table 4 and 5 show that for the proposed GS-IMC models, Tikhonov regularization produces the best HR and NDCG results on both Koubei and Netflix, while diffusion process regularization performs best on Tmall. Meanwhile, BGS-IMC with random walk regularization achieves the best HR and NDCG results on Koubei, while Tikhonov regularization and diffusion process regularization are best on Tmall and Netflix. Perhaps more importantly, BGS-IMC consistently outperforms GS-IMC on all three datasets by a margin, which proves the efficacy of the prediction-correction algorithm.
We highlight that the reason why BGS-IMC can further improve the performance of GS-IMC is that BGS-IMC considers Gaussian noise in the Fourier domain, and the prediction-correction update algorithm is capable of providing unbiased and minimum-variance predictions.
C.2 IMPACT OF GRAPH DEFINITIONS
Table 6 presents the HR and NDCG results of GS-IMC with one-step random walk regularization on the Netflix prize data. To avoid clutter, we omit the results of GS-IMC with other regularization functions, since their results share the same trends. It seems that the regular graph that uses the covariance matrix as the affinity matrix has better HR and NDCG results when recommending 10 and 50 items, while the hypergraph helps achieve better results when recommending 100 items.
D SCALABILITY STUDIES
The solution for either GS-IMC or BGS-IMC requires computing the leading eigenvectors whose eigenvalues are less than or equal to a pre-specified ω. However, one might argue that this is computationally intractable on industry-scale datasets. To address the concern, one feasible approach is to perform the Nyström (Fowlkes et al., 2004) method to obtain the leading eigenvectors. For the completeness of the paper, we present the pseudo-code of the approximate eigendecomposition (Chen et al., 2021) in Algorithm 1, of which the computational complexity is O(lnk + k³), where n is the number of columns in Ł, l is the number of sampled columns and k is the number of eigenvectors to compute. This reduces the overhead from O(n³) to O(lnk + k³), linear in the number of rows. To evaluate how the proposed GS-IMC and BGS-IMC methods perform with the approximate eigenvectors, we conduct experiments on the largest Netflix prize data. Table 7 reports the HR, NDCG and runtime results for the standard GS-IMC and BGS-IMC methods, and their scalable versions entitled GS-IMCs and BGS-IMCs. To make the comparison complete, we also present the results of the neural IDCF (Wu et al., 2021) model equipped with ChebyNet (Defferrard et al., 2016). It is obvious that the standard GS-IMC and BGS-IMC methods consume only a small fraction of the training time required by graph neural networks. Meanwhile, GS-IMCs achieves comparable ranking
Algorithm 1 Approximate Eigendecomposition
Require: n × l matrix C derived from l columns sampled from the n × n kernel matrix Ł without replacement, l × l matrix A composed of the intersection of these l columns, l × l matrix W, rank k, the oversampling parameter p and the number of power iterations q.
Ensure: approximate eigenvalues Σ̃ and eigenvectors Ũ.
1: Generate a random Gaussian matrix Ω ∈ R^{l×(k+p)}, then compute the sample matrix A^q Ω.
2: Perform QR-decomposition on A^q Ω to obtain an orthonormal matrix Q that satisfies the equation A^q Ω = Q Q^⊤ A^q Ω, then solve Z Q^⊤ Ω = Q^⊤ W Ω for Z.
3: Compute the eigenvalue decomposition of the (k+p)-by-(k+p) matrix Z, i.e., Z = U_Z Σ_Z U_Z^⊤, to obtain U_W = Q U_Z[:, :k] and Σ_W = Σ_Z[:k, :k].
4: Return Σ̃ ← Σ_W, Ũ ← C A^{-1/2} U_W Σ_W^{-1/2}.
performance to GS-IMC, while improving the efficiency by 8X. Likewise, BGS-IMCs enjoys the improvement in the system scalability without significant loss in prediction accuracy. The overall results demonstrate that GS-IMC and BGS-IMC are highly scalable in very large data.
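As a reference, the following NumPy sketch mirrors the pseudo-code of Algorithm 1; symmetrizing Z before the eigendecomposition and the small-value guards are assumptions of this sketch, not part of the original algorithm of (Chen et al., 2021).

import numpy as np

def approx_eigendecomp(C, A, W, k, p=10, q=2):
    l = A.shape[0]
    # Step 1: random test matrix and q power iterations.
    Omega = np.random.randn(l, k + p)
    Y = np.linalg.matrix_power(A, q) @ Omega
    # Step 2: orthonormal basis Q; solve Z (Q^T Omega) = Q^T W Omega for Z.
    Q, _ = np.linalg.qr(Y)
    X, B = Q.T @ Omega, Q.T @ W @ Omega
    Z = np.linalg.solve(X.T, B.T).T            # Z = B X^{-1}
    Z = 0.5 * (Z + Z.T)                        # treat Z as symmetric (assumption)
    # Step 3: small eigendecomposition; keep the k leading pairs.
    evals, U_Z = np.linalg.eigh(Z)
    idx = np.argsort(evals)[::-1][:k]
    Sigma_W, U_W = evals[idx], Q @ U_Z[:, idx]
    # Step 4: lift back with C A^{-1/2} and rescale by Sigma_W^{-1/2}.
    s, V = np.linalg.eigh(A)
    A_inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(s, 1e-12))) @ V.T
    U_tilde = C @ A_inv_sqrt @ U_W @ np.diag(1.0 / np.sqrt(np.maximum(Sigma_W, 1e-12)))
    return Sigma_W, U_tilde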
E SPECTRUM ANALYSIS AND DISCUSSION WITH SEQUENTIAL MODELS
We compare BGS-IMC with recent sequential recommendation models, including the Transformer-based SASREC (Kang & McAuley, 2018), BERT-based BERT4REC (Sun et al., 2019) and causal CNN based GREC (Yuan et al., 2020). We choose an embedding size of 256 and grid-search the optimal hyper-parameters. Each model is configured using the same parameters provided by the original paper, i.e., two attention blocks with one head for SASREC, three attention blocks with eight heads for BERT4REC and six dilated CNNs with degrees 1, 2, 2, 4, 4, 8 for GREC.
Table 8 presents HR and NDCG results on Koubei for inductive top-N ranking. Note that BGS-IMC only accepts the most recent behavior to update the obsolete state for incremental learning, whereas SASREC, BERT4REC and GREC focus on modeling the dynamic patterns in the sequence. Hence, such a comparison is not in favor of BGS-IMC. Interestingly, we see that static BGS-IMC achieves comparable HR results to SOTA sequential models, while consuming a small fraction of running time. From this viewpoint, BGS-IMC is more cost-effective than the compared methods.
To fully understand the performance gap in NDCG, we analyze GS-IMC, BGS-IMC and the best baseline BERT4REC in the graph spectral domain, where we limit the ℓ2 norm of each user’s spectral signals to one and visualize their averaged values in Figure 4. As expected, the energy of GS-IMC and BGS-IMC is concentrated on the low frequencies, since the high-frequency functions are highly penalized during minimization. Furthermore, the proposed prediction-correction update algorithm increases the energy of high-frequency functions. This bears a similarity with BERT4REC, whose high-frequency functions are not constrained and can aggressively raise the rankings of unpopular items. This explains why BERT4REC and BGS-IMC have better NDCGs than GS-IMC.
F LIMITATION AND FUTURE WORK
Limitation on sequence modeling. The proposed BGS-IMC method is simple and cannot capture the sophisticated dynamics in the sequence. However, we believe that our work opens the possibility of benefiting sequential recommendation with graph signal processing techniques, for example extended Kalman filter, KalmanNet and Particle filter.
Limitation on sample complexity. The sample complexity is not provided in the paper, and we believe that this is an open problem due to the lack of regularity in the graph, which prevents us from defining the idea of sampling “every other node” (the reader is referred to (Anis et al., 2016; Ortega et al., 2018) for more details).
Future work on deep graph learning. Though GS-IMC and BGS-IMC are mainly compared with neural graph models, we note that our approach can help improve the performance of existing graph neural networks including GAT (Veličković et al., 2017) and SAGE (Hamilton et al., 2017), etc. We summarize the following directions for future works: 1) It is interesting to see how GS-IMC takes advantage of content features. One feasible idea is to use GS-IMC as multi-scale wavelets which
can be easily adapted to graph neural networks; 2) BGS-IMC can also be utilized to optimize the aggregation module for the improved robustness, as every neighbor’s representation can be viewed as a measurement of the query node’s representation.
G PROOF OF THEOREM 4
Proof. This proof is analogous to Theorem 1.1 in (Pesenson, 2009), where we extend their results from Sobolev norm to a broader class of positive, monotonically increasing functionals.
Proof of the first part of the Theorem 4.
Suppose that the Laplacian operator Ł has bounded inverse and the fitting error ε = 0. If y ∈ PW_ω(G) and ŷ_k interpolates y on a set Ω = V − Ω^c, and Ω^c admits the Poincare inequality ‖φ‖ ≤ Λ‖Łφ‖ for any φ ∈ L2(Ω^c), then y − ŷ_k ∈ L2(Ω^c) and we have
‖y − ŷk‖ ≤ Λ‖Ł(y − ŷk)‖.
At this point, we can apply Lemma 7 with Λ = a and φ = y − ŷ_k. It gives the following inequality

‖y − ŷ_k‖ ≤ Λ^k ‖Ł^k(y − ŷ_k)‖

for all k = 2^l, l = 0, 1, 2, . . . Since R(λ) is a positive and monotonically increasing function, it gives

Λ^k ‖Ł^k(y − ŷ_k)‖ ≤ Λ^k ‖R(Ł)^k (y − ŷ_k)‖.

Because the interpolant ŷ_k minimizes the norm ‖R(Ł)^k · ‖, we have

‖R(Ł)^k (y − ŷ_k)‖ ≤ ‖R(Ł)^k y‖ + ‖R(Ł)^k ŷ_k‖ ≤ 2‖R(Ł)^k y‖.

As for functions y ∈ PW_ω(G) ⊂ PW_{R(ω)}(G), the Bernstein inequality in Lemma 8 holds:

‖R(Ł)^k y‖ ≤ R(ω)^k ‖y‖, k ∈ N.

Putting everything together, we conclude the first part of Theorem 4:

‖y − ŷ_k‖ ≤ 2(ΛR(ω))^k ‖y‖, ΛR(ω) < 1, k = 2^l, l ∈ N. (21)
Proof of the second part of the Theorem 4.
Since ΛR(ω) < 1 holds, it gives the following limit

lim_{k→∞} (ΛR(ω))^k = 0 and lim_{k→∞} ‖y − ŷ_k‖ ≤ 0.

With the non-negativity of the norm, we have

‖y − ŷ_k‖ ≥ 0. (22)

This implies the second part of Theorem 4:

y = lim_{k→∞} ŷ_k. (23)
Lemma 7 (restated from Lemma 4.1 in (Pesenson, 2009)). Suppose that Ł is a bounded self-adjoint positive definite operator in a Hilbert space L2(G), and ‖φ‖ ≤ a‖Łφ‖ holds true for any φ ∈ L2(G) and a positive scalar a > 0; then for all k = 2^l, l = 0, 1, . . . , the following inequality holds true:

‖φ‖ ≤ a^k ‖Ł^k φ‖. (24)
Lemma 8 (restated from Theorem 2.1 in (Pesenson, 2008)). A function f ∈ L2(G) belongs to PW_ω(G) if and only if the following Bernstein inequality holds true for all s ∈ R+:

‖Ł^s f‖ ≤ ω^s ‖f‖. (25)
G.1 EXTRA DISCUSSION
In (Pesenson, 2008), the complementary set S = Ω^c = V − Ω which admits the Poincare inequality is called the Λ-set. Theorem 4 in our paper and Theorem 1.1 in (Pesenson, 2009) state that bandlimited functions y ∈ PW_ω(G) can be reconstructed from their values on a uniqueness set Ω = V − S. To better understand the concept of the Λ-set, we restate Lemma 9 from (Pesenson, 2008), which presents the conditions for a Λ-set. It is worth pointing out that (i) the second condition suggests that the vertices from the Λ-set would likely be sparsely connected with the uniqueness set Ω; and (ii) the vertices in the Λ-set are disconnected from each other or isolated in the subgraph constructed by the vertices S, otherwise there always exists a non-zero function φ ∈ L2(S), ‖φ‖ ≠ 0, which makes ‖Łφ‖ = 0.
Lemma 9 (restated from Lemma 3.6 in (Pesenson, 2008)). Suppose that for a set of vertices S ⊂ V (finite or infinite) the following holds true:
1. every point from S is adjacent to a point from the boundary bS, the set of all vertices in V which are not in S but adjacent to a vertex in S;
2. for every v ∈ S there exists at least one adjacent point uv ∈ bS whose adjacency set intersects S only over v;
3. the number Λ = sup_{v∈S} d(v) is finite;
Then the set S is a Λ-set which admits the Poincare inequality

‖φ‖ ≤ Λ‖Łφ‖, φ ∈ L2(S). (26)
In our experiments for recommender systems, each user’s ratings might not comply with the Poincare inequality. This is because there exist some users who prefer niche products/movies (low-degree nodes). As shown in Fig. 2, user preferences on low-degree nodes are determined by high-frequency functions. When R(ω) is not large enough, the Poincare inequality does not hold for such users. This also explains why our model performs poorly for cold items.
Regarding the choice of parameter k, empirical results show that using k ≥ 2 does not help improve the performance, and note that when k is large enough, all kernels reduce to the bandlimited norm, i.e., R(λ) = 1 if λ ≤ λ_k ≤ 1, since the gap between eigenvalues shrinks.
H PROOF OF THEOREM 5
Proof. Let ξ denote the random label noise which flips a 1 to 0 with rate ρ, and assume that the sample s = y + ξ is observed from y under noise ξ. Then for a graph spectral filter H_ϕ = (I + R(Ł)/ϕ)^{-1} with positive ϕ > 0, we have

E[MSE(y, ŷ)] = (1/n) E‖y − H_ϕ(y + ξ)‖²
             ≤ (1/n) E‖H_ϕ ξ‖² + (1/n) ‖(I − H_ϕ)y‖², (27)
where the last inequality holds due to the triangle inequality of the matrix norm.
To bound E‖H_ϕ ξ‖², let C_n = R^{1/2}(ω)‖y‖; then

E‖H_ϕ ξ‖² (a)= ∑_{y(v)=1} ρ(H_{ϕ,(∗,v)} × (−1))² + (1 − ρ)(H_{ϕ,(∗,v)} × 0)²
           = ρ ∑_{y(v)=1} (H_{ϕ,(∗,v)} y(v))² = ρ ‖H_ϕ y‖²
           (b)≤ sup_{‖R^{1/2}(Ł)y‖ ≤ C_n} ρ ‖H_ϕ y‖² = sup_{‖z‖ ≤ C_n} ρ ‖H_ϕ R^{-1/2}(Ł) z‖²
           = ρ C_n² σ_max²(H_ϕ R^{-1/2}(Ł))
           = ρ C_n² max_{l=1,...,n} 1/(1 + R(λ_l)/ϕ)² · 1/R(λ_l)
           ≤ ρ ϕ² C_n² / (R(λ_1)(ϕ + R(λ_1))²), (28)
where (a) follows the definition of the flip random noise ξ and (b) holds due to the fact that y is in the Paley-Wiener space PW_ω(G). As for the second term,
‖(I − H_ϕ)y‖² ≤ sup_{‖R^{1/2}(Ł)y‖ ≤ C_n} ‖(I − H_ϕ)y‖²
              (a)= sup_{‖z‖ ≤ C_n} ‖(I − H_ϕ) R^{-1/2}(Ł) z‖²
              = C_n² σ_max²((I − H_ϕ) R^{-1/2}(Ł)) = C_n² max_{l=1,...,n} (1 − 1/(1 + R(λ_l)/ϕ))² · 1/R(λ_l)
              = (C_n²/ϕ) max_{l=1,...,n} (R(λ_l)/ϕ) / (R(λ_l)/ϕ + 1)²
              (b)≤ C_n² / (4ϕ), (29)
where (a) holds due to the fact that the eigenvectors of I − H_ϕ are the eigenvectors of R(Ł); and (b) follows the simple upper bound x/(1 + x)² ≤ 1/4 for x ≥ 0. By combining everything together, we conclude the result
E[MSE(y, ŷ)] ≤ (C_n²/n) ( ρϕ² / (R(λ_1)(ϕ + R(λ_1))²) + 1/(4ϕ) ). (30)
H.1 EXTRA DISCUSSION
Choosing ϕ to balance the two terms on the right-hand side above gives ϕ* = ∞ for ρ < 1/8 and 1 + R(λ_1)/ϕ* = 2ρ^{1/3} for ρ ≥ 1/8. Plugging in this choice, we have the upper bound if ρ ≥ 1/8:

E[MSE(y, ŷ)] ≤ (C_n² / (4R(λ_1)n)) (3ρ^{1/3} − 1), (31)

and if ρ < 1/8, then the upper bound is

E[MSE(y, ŷ)] ≤ C_n² ρ / (4R(λ_1)n). (32)
This result implies that we can use a large ϕ to obtain accurate reconstruction when the flip rate ρ is not greater than 1/8, and ϕ needs to be carefully tuned when the flip rate ρ is greater than 1/8.
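A small helper encoding this rule (a sketch of the ϕ selection above; the function name is ours, and the boundary case ρ = 1/8 is handled by returning infinity):

import numpy as np

def optimal_phi(rho, R_lambda1):
    # phi* = inf for rho <= 1/8; otherwise 1 + R(lambda_1)/phi* = 2 rho^{1/3}.
    if rho <= 1.0 / 8.0:
        return np.inf
    return R_lambda1 / (2.0 * rho ** (1.0 / 3.0) - 1.0)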
I PROOF OF PROPOSITION 6
We present the proof below in a Bayesian framework, and the reader is referred to (Maybeck, 1982) for a geometrical interpretation of Monte Carlo estimate statistics.
Proof of the minimal variance
To minimize the estimate variance, we need to minimize the main diagonal of the covariance P_new:

trace(P_new) = trace( (I − K) P̄_new (I − K)^⊤ + K Σ_ν K^⊤ ).

Then, we differentiate the trace of P_new with respect to K:

d trace(P_new) / dK = 2K P̄_new − 2P̄_new + 2K Σ_ν.

The optimal K which minimizes the variance should satisfy d trace(P_new)/dK = 0, which gives

K (P̄_new + Σ_ν) = P̄_new.

This implies that the variance of the estimate x̂_new is minimized when K = P̄_new (P̄_new + Σ_ν)^{-1}.
Proof of the unbiasedness
Suppose that the obsolete estimate x̂ is unbiased, i.e., E x̂ = x; then using Eq. (11) we have

E(x̄_new) = E(x̂ + F∆s) = x + F∆s = x_new.

Because of Eq. (12) and the fact that the measurement noise ν has zero mean, it gives

E(z_new) = E(x_new + ν) = x_new.

Putting everything together, we conclude the following result:

E(x̂_new) = E(x̄_new + K(z_new − x̄_new)) = x_new + K(x_new − x_new) = x_new. (33)
This implies that the state estimate x̂_new is unbiased.
J IMPLEMENTATION DETAILS
In this section, we present the details for our implementation in Section 5 including the additional dataset details, evaluation protocols, model architectures in order for reproducibility. All the experiments are conducted on the machines with Xeon 3175X CPU, 128G memory and P40 GPU with 24 GB memory. The configurations of our environments and packages are listed below:
• Ubuntu 16.04
• CUDA 10.2
• Python 3.7
• Tensorflow 1.15.3
• Pytorch 1.10
• DGL 0.7.1
• NumPy 1.19.0 with MKL Intel
J.1 ADDITIONAL DATASET DETAILS
We use three real-world datasets which are processed in line with (Liang et al., 2018; Steck, 2019): (1) for Koubei2, we keep users with at least 5 records and items that have been purchased by at least 100 users; and (2) for Tmall3, we keep users who click at least 10 items and items which have been seen by at least 200 users; and (3) for Netflix4, we keep all of the users and items. In addition, we chose the random seed as 9876 when splitting the users into training/validation/test sets.
2https://tianchi.aliyun.com/dataset/dataDetail?dataId=53 3https://tianchi.aliyun.com/dataset/dataDetail?dataId=35680 4https://kaggle.com/netflix-inc/netflix-prize-data
J.2 EVALUATION PROTOCOLS
In Figure 5, we illustrate the difference between the transductive ranking and inductive ranking evaluation protocols. In the transductive ranking problem, the model performance is evaluated on the users already known during model training, whereas in the inductive ranking problem the model performance is evaluated on unseen users. It is worth noting that in the testing phase, we sort all interactions of the validation/test users in chronological order, holding out the last interaction for testing and inductively generating the necessary representations on the rest of the data. In a nutshell, we evaluate our approach and the baselines on the challenging inductive next-item prediction problem.
J.3 EVALUATION METRICS
We adopt hit-rate (HR) and normalized discounted cumulative gain (NDCG) to evaluate the model performance. Suppose that the model provides N recommended items for user u as R_u, and let T_u denote the interacted items of the user; then HR is computed as follows:
HR@N = E_u 1_{|T_u ∩ R_u|}, (34)
where 1|Ω| is equal to 1 if set Ω is not empty and is equal to 0 otherwise. NDCG evaluates ranking performance by taking the positions of correct items into consideration:
NDCG@N = (1/Z) DCG@N = (1/Z) ∑_{j=1}^{N} (2^{1_{|R_u^j ∩ T_u|}} − 1) / log₂(j + 1), (35)
where Z is the normalizing constant that represents the maximum value of DCG@N for T_u.
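For reference, a minimal Python sketch of both metrics for a single user; under our protocol each user holds out exactly one interaction, in which case the ideal DCG normalizer reduces to 1.

import numpy as np

def hr_ndcg_at_n(ranked_items, true_items, N):
    top = ranked_items[:N]
    hits = [1 if item in true_items else 0 for item in top]
    hr = 1.0 if sum(hits) > 0 else 0.0            # Eq. (34)
    dcg = sum((2 ** h - 1) / np.log2(j + 2) for j, h in enumerate(hits))
    n_rel = min(len(true_items), N)               # ideal DCG normalizer Z
    idcg = sum(1.0 / np.log2(j + 2) for j in range(n_rel))
    return hr, (dcg / idcg if idcg > 0 else 0.0)  # Eq. (35)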
J.4 GRAPH LAPLACIAN
Let R denote the item-user rating matrix, and let D_v and D_e denote the diagonal degree matrices of vertices and edges, respectively; then the graph Laplacian matrix used in our experiments is defined as follows:
Ł = I − D_v^{-1/2} R D_e^{-1} R^⊤ D_v^{-1/2}, (36)

where I is the identity matrix.
J.5 DISCUSSION ON PREDICTION FUNCTIONS
In experiments, we focus on making personalized recommendations to the users, so that we are interested in the ranks of the items for each user. Specifically, for the top-k ranking problem we choose the items with the k largest predicted ratings,

Recommendation@k = max_{|O|=k} ∑_{v∈O, v∉Ω+} ŷ(v). (37)
More importantly, our proposed method is also suitable for the link prediction problem, where the goal is to classify whether an edge between two vertices exists or not. This can be done by choosing a splitting point to partition the candidate edges into two parts. There are many different ways of choosing such a splitting point. One can select the optimal splitting point based on the ROC or AUC results on the validation set.
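A minimal sketch of the top-k selection in Eq. (37); masking the training interactions Ω+ before ranking is an implementation detail of this sketch, not part of the equation.

import numpy as np

def recommend_top_k(y_hat, omega_plus, k):
    scores = y_hat.copy()
    scores[list(omega_plus)] = -np.inf   # exclude observed training interactions
    return np.argsort(-scores)[:k]       # k largest predicted ratings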
J.6 MODEL ARCHITECTURES
As mentioned before, we equip IDCF (Wu et al., 2021) with different GNN architectures as the backbone. Here we introduce the details for them.
GAT. We use the GATConv layer available in DGL for implementation. The detailed architecture description is as below:
• A sequence of one-layer GATConv with four heads.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use tanh as the activation.
• Use inner product between user embedding and item embedding as ranking score.
GraphSAGE. We use the SAGEConv layer available in DGL for implementation. The detailed architecture description is as below:
• A sequence of two-layer SAGEConv.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use ReLU as the activation.
• Use inner product between user embedding and item embedding as ranking score.
SGC. We use the SGConv layer available in DGL for implementation. The detailed architecture description is as below:
• One-layer SGConv with two hops.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use ReLU as the activation.
• Use inner product between user embedding and item embedding as ranking score.
ChebyNet. We use the ChebConv layer available in DGL for implementation. The detailed architecture description is as below:
• One-layer ChebConv with two hops.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use ReLU as the activation.
• Use inner product between user embedding and item embedding as ranking score.
ARMA. We use the ARMAConv layer available in DGL for implementation. The detailed architecture description is as below:
• One-layer ARMAConv with two hops.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use tanh as the activation.
• Use inner product between user embedding and item embedding as ranking score.
We also summarize the implementation details of the compared sequential baselines as follows.
SASREC.5 We use the software provided by the authors for experiments. The detailed architecture description is as below:
• A sequence of two-block Transformer with one head.
• Set the maximum sequence length to 30.
• Use inner product between user embedding and item embedding as ranking score.
BERT4REC.6 We use the software provided by the authors for experiments. The detailed architecture description is as below:
• A sequence of three-block Transformer with eight heads.
• Set the maximum sequence length to 30 with the masked probability 0.2.
• Use inner product between user embedding and item embedding as ranking score.
5https://github.com/kang205/SASRec 6https://github.com/FeiSun/BERT4Rec
GREC.7 We use the software provided by the authors for experiments. The detailed architecture description is as below:
• A sequence of six-layer dilated CNN with degrees 1, 2, 2, 4, 4, 8.
• Set the maximum sequence length to 30 with the masked probability 0.2.
• Use inner product between user embedding and item embedding as ranking score.
7https://github.com/fajieyuan/WWW2020-grec

1. What is the focus and contribution of the paper on graph signal processing?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its application to recommender systems?
3. Do you have any concerns about the main contributions of the paper, such as novelty or clarity?
4. How do the authors extend graph signal processing techniques to cope with discrete random label noise?
5. What are the limitations of the proposed method compared to existing network Lasso methods?
6. What is the problem formulation, and how can it be made more precise and clear in the paper?
7. What is the parameter k in Theorem 3, and how should it be chosen?
8. How restrictive is the Poincare condition, and is it satisfied in the numerical experiments?
9. What is \hat{y} in (9), and how does the probability distribution underlying the expectation work?
10. How was the graph (Laplacian) obtained for the numerical experiments, and could you provide more detail?
11. What is the "classical sampling theorem"?
12. Are there any minor errors in the review that need to be addressed? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Authors extend graph signal processing techniques to cope with discrete random label noise.
Strengths And Weaknesses
Strength: Authors consider a useful application of graph signal processing to recommender systems.
Weaknesses:
The main contributions of the paper are unclear. Is it a novel graph signal model? Is it a novel graph signal recovery method (Eq. (5))?
At least in its application to recommender systems I would like to see a comparison of the proposed methods with existing network Lasso methods as proposed e.g. in
N. Tran, H. Ambos and A. Jung, "Classifying Partially Labeled Networked Data VIA Logistic Network Lasso," ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 3832-3836, doi: 10.1109/ICASSP40776.2020.9054408.
A. Jung, "Networked Exponential Families for Big Data Over Networks," in IEEE Access, vol. 8, pp. 202897-202909, 2020, doi: 10.1109/ACCESS.2020.3033817.
Theorem 3 needs more discussion. How to choose the parameter k? How restrictive is the Poincare condition? Is this condition satisfied in the numerical experiments?
The problem formulation needs to be made more precise and placed earlier in the paper. Currently it seems only described in Section 4.1. which is too late.
What is \hat{y} in (9)?
Pls explain more clearly the probability distribution underlying the expectation in Eq. (9).
Pls discuss more explicitly how the graph (Laplacian) has been obtained for the numerical experiments.
"The classical sampling theorem states that functions..." what is the "classical sampling theorem" ?
minor errors:
"...the testing phrase,.."
"..experiments, we define the hypergraph using matrix R.." unclear what the matrix R is/how obtained.
"..methods are not well suited to our 1-bit matrix completion problem due to the issues of 1-bit quantization..." pls try to be more specific. what are these "issues" ?
Clarity, Quality, Novelty And Reproducibility
see above |
ICLR

Title
Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution
Abstract
Inductive one-bit matrix completion is motivated by modern applications such as recommender systems, where new users would appear at test stage with the ratings consisting of only ones and no zeros. We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing. The key idea is to transform each user’s ratings on the items to a function (graph signal) on the vertices of an item-item graph, then learn structural graph properties to recover the function from its values on certain vertices — the problem of graph signal sampling. We propose a class of regularization functionals that takes into account discrete random label noise in the graph vertex domain, then develop the GS-IMC approach which biases the reconstruction towards functions that vary little between adjacent vertices for noise reduction. Theoretical results show that accurate reconstructions can be achieved under mild conditions. For the online setting, we develop a Bayesian extension, i.e., BGS-IMC which considers continuous random Gaussian noise in the graph Fourier domain and builds upon a prediction-correction update algorithm to obtain the unbiased and minimum-variance reconstruction. Both GS-IMC and BGS-IMC have closed-form solutions and thus are highly scalable in large data as verified on public benchmarks.
1 INTRODUCTION
In domains such as recommender systems and social networks, only “likes” (i.e., ones) are observed in the system and service providers (e.g., Netflix) are interested in discovering potential “likes” for existing users to stimulate demand. This motivates the problem of 1-bit matrix completion (OBMC), of which the goal is to recover missing values in an n-by-m item-user matrix R ∈ {0, 1}^{n×m}. We note that R_{i,j} = 1 means that item i is rated by user j, but R_{i,j} = 0 is essentially unlabeled or unknown, which is a mixture of unobserved positive examples and true negative examples.
However, in real world new users, who are not exposed to the model during training, may appear at testing stage. This fact stimulates the development of inductive 1-bit matrix completion, which aims to recover unseen vector y ∈ {0, 1}n from its partial positive entries Ω+ ⊆ {j|yj = 1} at test time. Fig. 1(a) emphasizes the difference between conventional and inductive approaches. More formally, let M∈{0, 1}n×(m+1) denote the underlying matrix, where only a subset of positive examples Ψ is randomly sampled from {(i, j)|Mi,j=1, i≤n, j≤m} such that Ri,j=1 for (i, j)∈Ψ and Ri,j=0 otherwise. Consider (m+1)-th column y out of matrix R, we likewise denote its observations si=1 for i ∈ Ω+ and si=0 otherwise. We note that the sampling process here assumes that there exists a random label noise ξ which flips a 1 to 0 with probability ρ, or equivalently s = y + ξ where
ξi = −1 for i ∈ {j|yj = 1} − Ω+, and ξi = 0 otherwise. (1) Fig. 1(a) presents an example of s,y, ξ to better understand their relationships.
Fundamentally, the reconstruction of true y from corrupted s bears a resemblance with graph signal sampling. Fig. 1(b) shows that the item-user rating matrix R can be used to define a homogeneous
∗Junchi Yan is the correspondence author who is also with Shanghai AI Laboratory. The work was in part supported by NSFC (62222607), STCSM (22511105100).
item-item graph (see Sec 3.1), such that user ratings y/s on items can be regarded as signals residing on graph nodes. The reconstruction of bandlimited graph signals from certain subsets of vertices (see Sec 2) has been extensively studied in graph signal sampling (Pesenson, 2000; 2008).
Despite popularity in areas such as image processing (Shuman et al., 2013; Pang & Cheung, 2017; Cheung et al., 2018) and matrix completion (Romero et al., 2016; Mao et al., 2018; McNeil et al., 2021), graph signal sampling appears less studied in the specific inductive one-bit matrix completion problem focused on in this paper (see Appendix A for detailed related works). Probably most closely related to our approach are MRFCF (Steck, 2019) and SGMC (Chen et al., 2021) which formulate their solutions as spectral graph filters. However, we argue that these methods are orthogonal to us since they focus on optimizing the rank minimization problem, whereas we optimize the functional minimization problem, thereby making it more convenient and straightforward to process and analyze the matrix data with vertex-frequency analysis (Hammond et al., 2011; Shuman et al., 2013), time-variant analysis (Mao et al., 2018; McNeil et al., 2021), smoothing and filtering (Kalman, 1960; Khan & Moura, 2008). Furthermore, (Steck, 2019; Chen et al., 2021) can be incorporated as special cases of our unified graph signal sampling framework (see Appendix B for detailed discussions).
Another emerging line of research has focused on learning the mapping from side information (or content features) to latent factors (Jain & Dhillon, 2013; Xu et al., 2013; Ying et al., 2018; Zhong et al., 2019). However, it has been recently shown (Zhang & Chen, 2020; Ledent et al., 2021; Wu et al., 2021) that in general this family of algorithms would possibly suffer inferior expressiveness when high-quality content is not available. Further, collecting personal data is likely to be unlawful as well as a breach of the data minimization principle in GDPR (Voigt & Von dem Bussche, 2017).
Much effort has also been made to leverage the advanced graph neural networks (GNN) for improvements. van den Berg et al. (2017) represent the data matrix R by a bipartite graph then generalize the representations to unseen nodes by summing the embeddings over the neighbors. Zhang & Chen (2020) develop graph neural networks which encode the subgraphs around an edge into latent factors then decode the factors back to the value on the edge. Besides, Wu et al. (2021) consider the problem in a downsampled homogeneous graph (i.e., user-user graph in recommender systems) then exploit attention networks to yield inductive representations. The key advantage of our approach is not only the closed form solution which takes a small fraction of training time required for GNNs, but also theory results that guarantee accurate reconstruction and provide guidance for practical applications.
We emphasize the challenges when connecting ideas and methods of graph signal sampling with inductive 1-bit matrix completion — 1-bit quantization and online learning. Specifically, 1-bit quantization raises challenges for formulating the underlying optimization problems: minimizing squared loss on the observed positive examples Ω+ yields a degenerate solution — the vector with all entries equal to one achieves zero loss; minimizing squared loss on the corrupted data s introduces a systematic error due to the random label noise ξ in Eq. (1). To address the issue, we represent the observed data R as a homogeneous graph, then devise a broader class of regularization functionals on graphs to mitigate the impact of the discrete random noise ξ. Existing theory for total variation denoising (Sadhanala et al., 2016; 2017) and graph regularization (Belkin et al., 2004; Huang et al., 2011), which takes into account continuous Gaussian noise, does not sufficiently address recoverability in inductive 1-bit matrix completion (see Sec 3.4). We finally manage to derive a closed-form solution, entitled Graph Sampling for Inductive (1-bit) Matrix Completion (GS-IMC), which biases the reconstruction towards functions that vary little between adjacent vertices for noise reduction.
For online learning, existing matrix factorization methods (Devooght et al., 2015; Volkovs & Yu, 2015; He et al., 2016) incrementally update model parameters via gradient descent, requiring an expensive line search to set the best learning rate. To scale up to large data, we develop a Bayesian extension called BGS-IMC where a prediction-correction algorithm is devised to instantly refresh the prediction given new incoming data. The prediction step tracks the evolution of the optimization problem such that the predicted iterate does not drift away from the optimum, while the correction step adjusts for the distance between the current prediction and the new information at each step. The advantage over baselines is that BGS-IMC considers the uncertainties in the graph Fourier domain, and the prediction-correction algorithm can efficiently provide unbiased and minimum-variance predictions in closed form, without using gradient descent techniques. The contributions are:
• New Inductive 1-bit Matrix Completion Framework. We propose and technically manage (for the first time to our best knowledge) to introduce graph signal sampling to inductive 1-bit matrix completion. It opens the possibility of benefiting the analysis and processing of the matrix with a signal processing toolbox including vertex-frequency analysis (Hammond et al., 2011; Shuman et al., 2013), time-variant analysis (Mao et al., 2018; McNeil et al., 2021), smoothing and filtering (Kalman, 1960; Khan & Moura, 2008) etc. We believe that our unified framework can serve as a new paradigm for 1-bit matrix completion, especially in large-scale and dynamic systems.
• Generalized Closed-form Solution. We derive a novel closed-form solution (i.e., GS-IMC) in the graph signal sampling framework, which incorporates existing closed-form solutions as special cases, e.g., (Chen et al., 2021; Steck, 2019). GS-IMC is learned from only positive data with discrete random noise. This is one of the key differences to typical denoising methods (Sadhanala et al., 2016) where efforts are spent on removing continuous Gaussian noise from a real-valued signal.
• Robustness Enhancement. We consider the online learning scenario and construct a Bayesian extension, i.e., BGS-IMC where a new prediction-correction algorithm is proposed to instantly yield unbiased and minimum-variance predictions given new incoming data. Experiments in Appendix E show that BGS-IMC is more cost-effective than many neural models such as SASREC (Kang & McAuley, 2018), BERT4REC (Sun et al., 2019) and GREC (Yuan et al., 2020). We believe that this proves a potential for the future application of graph signal sampling to sequential recommendation.
• Theoretical Guarantee and Empirical Effectiveness. We extend the Paley-Wiener theorem of (Pesenson, 2009) on real-valued data to positive-unlabelled data with statistical noise. The theory shows that under mild conditions, unseen rows and columns in training can be recovered from a certain subset of their values that is present at test time. Empirical results on real-world data show that our methods achieve state-of-the-art performance for the challenging inductive Top-N ranking tasks.
2 PRELIMINARIES
In this section, we introduce the notions and provide the necessary background of graph sampling theory. Let G = (V,E,w) denote a weighted, undirected and connected graph, where V is a set of vertices with |V | = n, E is a set of edges formed by the pairs of vertices and the positive weight w(u, v) on each edge is a function of the similarity between vertices u and v.
Space L2(G) is the Hilbert space of all real-valued functions f : V → R with the following norm:

‖f‖ = √(∑_{v∈V} |f(v)|²), (2)
and the discrete Laplace operator Ł is defined by the formula (Chung & Graham, 1997):

Łf(v) = (1/√d(v)) ∑_{u∈N(v)} w(u, v) ( f(v)/√d(v) − f(u)/√d(u) ), f ∈ L2(G),

where N(v) signifies the neighborhood of node v and d(v) = ∑_{u∈N(v)} w(u, v) is the degree of v.
Definition 1 (Graph Fourier Transform). Given a function or signal f in L2(G), the graph Fourier transform and its inverse (Shuman et al., 2013) can be defined as follows:
f̃_G = U^⊤ f and f = U f̃_G, (3)
where U represents the eigenfunctions of the discrete Laplace operator Ł, f̃_G denotes the signal in the graph Fourier domain and f̃_G(λ_l) = ⟨f, u_l⟩ signifies the information at the frequency λ_l.¹
Definition 2 (Bandlimitedness). f ∈ L2(G) is called an ω-bandlimited function if its Fourier transform f̃_G has support in [0, ω], and ω-bandlimited functions form the Paley-Wiener space PW_ω(G).
Definition 3 (Graph Signal Sampling). Given y ∈ PW_ω(G), y can be recovered from its values on the vertices Ω+ by minimizing the below objective (Pesenson, 2000; 2008), with positive scalar k:
min_{f∈L2(G)} ‖Ł^k f‖ s.t. f(v) = y(v), ∀v ∈ Ω+. (4)
Recall that the observation in inductive 1-bit matrix completion consists of only ones and no zeros (i.e., y(v) = 1 for v ∈ Ω+) and ‖Ł^k 1‖ = 0. It is obvious that minimizing the loss on the observed entries corresponding to ones produces a degenerate solution — the vector with all entries equal to one achieves zero loss. From this point of view, existing theory for sampling real-valued signals (Pesenson, 2000; 2008) is not well suited to the inductive 1-bit matrix completion problem.
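For concreteness, a minimal NumPy sketch of the transform pair in Eq. (3), using a dense eigendecomposition of the Laplacian (feasible only for small graphs; a scalable approximation is discussed in Appendix D).

import numpy as np

def graph_fourier(L, f):
    lam, U = np.linalg.eigh(L)   # eigenvalues ascending, low frequencies first
    f_tilde = U.T @ f            # forward transform into the graph Fourier domain
    f_back = U @ f_tilde         # inverse transform recovers f
    return lam, f_tilde, f_back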
3 CLOSED-FORM SOLUTION FOR 1-BIT MATRIX COMPLETION
This section builds a unified graph signal sampling framework for inductive 1-bit matrix completion that can inductively recover y from positive ones on the set Ω+. The rationale behind our framework is that rows that have similar observations are likely to have similar reconstructions. This makes a lot of sense in practice, for example a user (column) is likely to give similar items (rows) similar scores in recommender systems. To achieve this, we need to construct a homogeneous graph G where the connected vertices represent the rows which have similar observations, so that we can design a class of graph regularized functionals that encourage adjacent vertices on graph G to have similar reconstructed values. In particular, we manage to provide a closed-form solution to the matrix completion problem (entitled GS-IMC), together with theoretical bounds and insights.
3.1 GRAPH DEFINITION
We begin with the introduction of two different kinds of methods to construct homogeneous graphs by using the zero-one matrix R ∈ R^{n×m}: (i) following the definition of hypergraphs (Zhou et al., 2007), matrix R can be regarded as the incidence matrix, so as to formulate the hypergraph Laplacian matrix as Ł = I − D_v^{-1/2} R D_e^{-1} R^⊤ D_v^{-1/2}, where D_v ∈ R^{n×n} (D_e ∈ R^{m×m}) is the diagonal degree matrix of vertices (edges); and (ii) for regular graphs, one of the most popular approaches is to utilize the covariance between rows to form the adjacency matrix A_{i,j} = Cov(R_i, R_j) for i ≠ j, so that we can define the graph Laplacian matrix as Ł = I − D_v^{-1/2} A D_v^{-1/2}.
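As an illustration of construction (ii), a small NumPy sketch; clipping negative covariances to keep the affinity non-negative is an assumption of this sketch, not part of the definition.

import numpy as np

def covariance_graph_laplacian(R):
    A = np.cov(R)                    # row-wise covariance, A_ij = Cov(R_i, R_j)
    np.fill_diagonal(A, 0.0)         # only i != j entries are used
    A = np.clip(A, 0.0, None)        # assumption: drop negative affinities
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(R.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]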
3.2 GRAPH SIGNAL SAMPLING FRAMEWORK
Given a graph G = (V,E), any real-valued column y ∈ Rn can be viewed as a function on G that maps from V to R, and specifically the i-th vector component yi is equivalent to the function value y(i) at the i-th vertex. Now it is obvious that the problem of inductive matrix completion, of which the goal is to recover column y from its values on entries Ω+, bears a resemblance to the problem of graph signal sampling that aims to recover function y from its values on vertices Ω+.
However, most existing graph signal sampling methods (Romero et al., 2016; Mao et al., 2018; McNeil et al., 2021) yield degenerate solutions when applying them to the 1-bit matrix completion problem. A popular heuristic is to treat some or all of the zeros as negative examples Ω−, then to recover y by optimizing the following functional minimization problem, given any k = 2^l, l ∈ N:
min_{f∈L2(G)} ‖[R(Ł)]^k f‖ s.t. ‖s_Ω − f_Ω‖ ≤ ε, (5)
1To be consistent with (Shuman et al., 2013), ul (l-th column of matrix U) is the l-th eigenvector associated with the eigenvalue λl, and the graph Laplacian eigenvalues carry a notion of frequency.
where recall that s = y + ξ is the observed data corrupted by discrete random noise ξ, and s_Ω (f_Ω) signifies the values of s (f) only on Ω = Ω+ ∪ Ω−; R(Ł) = ∑_l R(λ_l) u_l u_l^⊤ denotes the regularized Laplace operator, in which {λ_l} and {u_l} are respectively the eigenvalues and eigenfunctions of operator Ł. It is worth noting that s(i) = y(i) + ξ(i) = 0 for i ∈ Ω− is not true negative data, and hence Ω− will introduce a systematic bias when there exists i ∈ Ω− such that y(i) = 1. The choice of regularization function R(λ) needs to account for two critical criteria: 1) The resulting regularization operator R(Ł) needs to be semi-positive definite. 2) As mentioned before, we expect the reconstruction ŷ to have similar values on adjacent nodes, so that uneven functions should be penalized more than even functions. To account for this, we adopt the family of positive, monotonically increasing functions (Smola & Kondor, 2003) as presented in Table 1.
To this end, we summarize two natural questions concerning our framework: 1) What are the benefits of introducing the regularized Laplacian penalty? It is obvious that minimizing the discrepancy between s_Ω and f_Ω does not provide the generalization ability to recover unknown values on the rest of the vertices V − Ω, and Theorems 4 and 5 answer the question by examining the error bounds. 2) What kind of R(Ł) constitutes a reasonable choice? It has been studied in (Huang et al., 2011) that R(Ł) is most appropriate if it is unbiased, and an unbiased R(Ł) reduces variance without incurring any bias on the estimator. We also highlight the empirical study in Appendix C that evaluates how the performance is affected by the definition of graph G and regularization function R(λ).
3.3 CLOSED-FORM SOLUTION
In what follows, we aim to provide a closed-form solution for our unified framework by treating all of the zeros as negative examples, i.e., s(v) = 1 for v ∈ Ω+ and s(v) = 0 otherwise. Then by using the method of Lagrange multipliers, we reformulate Eq. (5) to the following problem:
min_{f∈L2(G)} (1/2)⟨f, R(Ł)f⟩ + (ϕ/2)‖s − f‖², (6)
where ϕ > 0 is a hyperparameter. Obviously, this problem has a closed-form solution:
ŷ = (I + R(Ł)/ϕ)^{-1} s = (∑_l (1 + R(λ_l)/ϕ) u_l u_l^⊤)^{-1} s = H(Ł)s, (7)
where H(Ł) = ∑_l H(λ_l) u_l u_l^⊤ with kernel 1/H(λ_l) = 1 + R(λ_l)/ϕ, and we exemplify H(λ) for ϕ = 1 in Table 1. From the viewpoint of spectral graph theory, our GS-IMC approach is essentially a spectral graph filter that amplifies (attenuates) the contributions of low (high)-frequency functions.
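For illustration, a dense NumPy sketch of the filter in Eq. (7); the Tikhonov choice R(λ) = λ is one of the norms in Table 1, and the optional band limit is an assumption of this sketch. Large graphs would instead use the approximate eigenvectors of Appendix D.

import numpy as np

def gs_imc_predict(L, s, phi=10.0, R=lambda lam: lam, k_band=None):
    lam, U = np.linalg.eigh(L)              # spectrum of the Laplacian
    if k_band is not None:                  # keep the k leading low frequencies
        lam, U = lam[:k_band], U[:, :k_band]
    h = 1.0 / (1.0 + R(lam) / phi)          # spectral response H(lambda)
    return U @ (h * (U.T @ s))              # y_hat = H(L) s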
Remark. To understand low-frequency and high-frequency functions, Figure 2 presents case studies in the context of recommender systems on the Netflix prize data (Bennett et al., 2007). Specifically, we divide the vertices (items) into four classes: very-high degree (> 5000), high degree (> 2000), medium degree (> 100) and low degree vertices. Then, we report the recall results of all the four classes in different Paley-Wiener spaces PWλ50(G), . . . ,PWλ1000(G) for top-100 ranking prediction. The interesting observation is: (1) the low-frequency functions with eigenvalues less than λ100 contribute nothing to low degree vertices; and (2) the high-frequency functions whose eigenvalues are greater than λ500 do not help to increase the performance on very-high degree vertices. This finding implies that low(high)-frequency functions reflect the user preferences on the popular(cold) items. From this viewpoint, the model defined in Eq. (7) aims to exploit the items with high clickthrough rate with high certainty, which makes sense in commercial applications.
3.4 ERROR ANALYSIS
Our GS-IMC approach defined in Eq. (7) bears a similarity to total variation denoising (Sadhanala et al., 2016; 2017), graph-constrained regularization (Belkin et al., 2004; 2006), and particularly Laplacian shrinkage methods (Huang et al., 2011). However, we argue that the proposed GS-IMC approach is fundamentally different from previous works. Specifically, they operate on real-valued data while GS-IMC deals with positive-unlabeled data. We believe that our problem setting is more complicated, since the unlabeled data is a mixture of unobserved positive examples and true negative examples. In addition, existing methods analyze the recoverability considering statistical noise to be continuous Gaussian, e.g., Theorem 3 (Sadhanala et al., 2016), Theorem 1.1 (Pesenson, 2009) etc.
However, we study the upper bound of GS-IMC in the presence of discrete random label noise ξ. Specifically, Theorem 4 extends the Paley-Wiener theorem of (Pesenson, 2009) from real-valued data to positive-unlabelled data, showing that a bandlimited function y can be recovered from its values on a certain set Ω. Theorem 5 takes into account the statistical noise ξ and shows that a bandlimited function y can be accurately reconstructed if C_n² = C > 0 is a constant, not growing with n.
Theorem 4 (Error Analysis, extension of Theorem 1.1 in (Pesenson, 2009)). Given R(λ) with λ ≤ R(λ) on graph G = (V,E), assume that Ω^c = V − Ω admits the Poincare inequality ‖φ‖ ≤ Λ‖Łφ‖ for any φ ∈ L²(Ω^c) with Λ > 0. Then for any y ∈ PW_ω(G) with 0 < ω ≤ R(ω) < 1/Λ,

‖y − ŷ_k‖ ≤ 2(ΛR(ω))^k ‖y‖   and   y = lim_{k→∞} ŷ_k,   (8)

where k is a pre-specified hyperparameter and ŷ_k is the solution of Eq. (5) with the fitting error set to 0.
Remark. Theorem 4 indicates that a better estimate of y can be achieved by simply using a higher k, but there is a trade-off between the accuracy of the estimate on one hand, and complexity and numerical stability on the other. We found by experiments that GS-IMC with k = 1 can achieve SOTA results for inductive top-N recommendation on benchmarks. We provide more discussions in Appendix G.
Theorem 5 (Error Analysis, with label noise). Suppose that ξ is the random noise with flip rate ρ, and positive λ_1 ≤ · · · ≤ λ_n are the eigenvalues of the Laplacian Ł. Then for any function y ∈ PW_ω(G),
E[MSE(y, ŷ)] ≤ (C_n²/n) ( ρ / (R(λ_1)(1 + R(λ_1)/ϕ)²) + 1/(4ϕ) ),   (9)

where C_n² = R(ω)‖y‖², ϕ is the regularization parameter and ŷ is defined in Eq. (7).
Remark. Theorem 5 shows that for a constant C_n² = C > 0 (not growing with n), the reconstruction error converges to zero as n grows large. Also, the reconstruction error decreases as R(ω) declines, which means low-frequency functions can be recovered more easily than high-frequency functions. We provide more discussions on ϕ, ρ in Appendix H.
4 BAYESIAN GS-IMC FOR ONLINE LEARNING
In general, an inductive learning approach such as GAT (Veličković et al., 2017) and SAGE (Hamilton et al., 2017), etc., can naturally cope with the online learning scenario, where the prediction is refreshed given a newly observed example. Essentially, GS-IMC is an inductive learning approach that can update its predictions more efficiently than previous matrix completion methods (Devooght et al., 2015; He et al., 2016). Let ∆s denote newly arriving data, which might be one-hot as in Fig. 3(a), and let ŷ denote the original prediction based on data s; then we can efficiently update ŷ to ŷ_new as follows:
ŷnew = H(Ł)(s + ∆s) = ŷ +H(Ł)∆s. (10)
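Reusing the sketch above, the update of Eq. (10) amounts to filtering only the new interactions and adding them to the cached prediction:

# Incremental update of Eq. (10); y_hat is the cached prediction for data s.
y_hat_new = y_hat + gs_imc_predict(U, lam, delta_s, phi=10.0)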
However, we argue that GS-IMC ingests the new data in an unrealistic, suboptimal way. Specifically, it does not take into account model uncertainties, assuming that the observed positive data is noise-free. This assumption limits the model's fidelity and flexibility in real applications. In addition, it assigns a uniform weight to each sample, assuming that the innovation, i.e., the difference between the current a priori prediction and the current observation, is equal for all samples.
4.1 PROBLEM FORMULATION
To model the uncertainties, we denote a measurement by z = Uᵀŷ (with Fourier basis U), which represents the prediction ŷ in the graph Fourier domain, and we assume that z is determined by a stochastic process.
In Fig. 3(b), the measurement z is governed by the hidden state x, and the noise ν captures the data uncertainties in an implicit manner. The choice of state transition equation needs to account for two critical criteria: (1) the model uncertainties need to be considered; (2) the transition from state x to state x_new needs to represent the evolution of the predictions ŷ, ŷ_new defined in Eq. (10).
To account for this, we propose a Bayesian extension of GS-IMC, entitled BGS-IMC, which considers the stochastic filtering problem in a dynamic state-space form:
x_new = x + F∆s + η,   (11)
z_new = x_new + ν,   (12)
where Eq. (11) essentially follows Eq. (10) in the graph Fourier domain, i.e., it results from multiplying both sides of Eq. (10) by Uᵀ. In control theory, F = UᵀH(Ł) is called the input matrix and ∆s represents the system input vector; note that F is k-by-n, consistent with the Remark below. The state equation (11) describes how the true state x, x_new evolves under the impact of the process noise η ∼ N(0, Σ_η), and the measurement equation (12) characterizes how a measurement z_new = Uᵀ(s + ∆s) of the true state x_new is corrupted by the measurement noise ν ∼ N(0, Σ_ν). It is worth noting that a larger determinant of Σ_ν means that data points are more dispersed, while for Σ_η a large determinant implies that BGS-IMC is not sufficiently expressive and it is better to use the measurement for decision making, i.e., BGS-IMC reduces to GS-IMC.
Using Bayes rule, the posterior is given by:
p(xnew|∆s, znew) ∝ p(znew|xnew)p(xnew|∆s), (13)
where p(znew|xnew) and p(xnew|∆s) follow a Gauss-Markov process.
4.2 PREDICTION-CORRECTION UPDATE ALGORITHM
To make an accurate prediction, we propose a prediction-correction update algorithm, resembling workhorse Kalman filtering-based approaches (Kalman, 1960; Wiener et al., 1964). To our knowledge, the class of prediction-correction methods appears less studied in the domain of 1-bit matrix completion, despite its popularity in time-series forecasting (Simonetto et al., 2016; de Bézenac et al., 2020) and computer vision (Matthies et al., 1989; Scharstein & Szeliski, 2002).
In the prediction step, we follow the evolution of the state as defined in Eq. (11) to compute the mean and the covariance of conditional p(xnew|∆s):
E[x_new|∆s] = x̂ + F∆s = x̄_new   and   Var(x_new|∆s) = P + Σ_η = P̄_new,   (14)

where x̂ is the estimated state of x and P is the estimated covariance, i.e., P = E(x − x̂)(x − x̂)ᵀ, while x̄_new, P̄_new are the extrapolated state estimate and covariance respectively. Meanwhile, it is easy to obtain the mean and the covariance of the conditional p(z_new|x_new):
E[znew|xnew] = E[xnew + ν] = xnew and Var(znew|xnew) = E[νν>] = Σν . (15)
In the correction step, we combine Eq. (13) with Eqs. (14) and (15):
p(x_new|∆s, z_new) ∝ exp( −(1/2) [ (x_new − z_new)ᵀ Σ_ν^{-1} (x_new − z_new) + (x_new − x̄_new)ᵀ P̄_new^{-1} (x_new − x̄_new) ] ).
By solving ∂ ln p(xnew|∆s, znew)/∂xnew = 0, we have the following corrected estimate state x̂new and covariance Pnew, where we recall that the new measurement is defined as znew =U>(s + ∆s):
x̂_new = x̄_new + K(z_new − x̄_new),   (16)
P_new = (I − K) P̄_new (I − K)ᵀ + K Σ_ν Kᵀ,   (17)
K = P̄_new (P̄_new + Σ_ν)^{-1},   (18)
where K is the Kalman gain and z_new − x̄_new is called the innovation. It is worth noting that Eq. (16) adjusts the predicted iterate x̄_new in terms of the innovation, which is the key difference from GS-IMC and existing methods, e.g., GAT (Veličković et al., 2017) and SAGE (Hamilton et al., 2017).
Remark. The BGS-IMC approach is highly scalable in Paley-Wiener spaces. Let PW_ω(G) be the span of k (≪ n) eigenfunctions whose eigenvalues are no greater than ω; then the transition matrix F in (11) is k-by-n and every covariance matrix is of size k × k. Computationally, when P, Σ_η, Σ_ν are diagonal, it takes O(k²) time to compute x̂_new and P_new, and O(nk) time for x̄_new and P̄_new. The total time complexity is O(nk + k²), linear in the number of vertices n. Further, Proposition 6 shows that x̂_new in (16) is an unbiased and minimum-variance estimator.
Proposition 6. Given an observation ∆s, provided F is known, x̂_new obtained in Eq. (16) is the optimal linear estimator in the sense that it is unbiased and minimum-variance.
To summarize, the complete procedure of BGS-IMC is to first specify Σ_η, Σ_ν, P using prior knowledge, then to calculate the extrapolated state x̄_new using (14), and finally to obtain x̂_new using (16), so that we have the updated model prediction ŷ_new = Ux̂_new that ingests the new observation.
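The following sketch puts the whole update together for the diagonal-covariance case mentioned in the Remark; it treats the graph Fourier transform as multiplication by Uᵀ, and the scalar variances sigma_eta, sigma_nu are illustrative assumptions rather than values from the paper.

import numpy as np

def bgs_imc_step(x_hat, P, s, delta_s, U, lam, phi=10.0,
                 sigma_eta=1e-4, sigma_nu=1e-4, R=lambda x: x):
    H = 1.0 / (1.0 + R(lam) / phi)
    # Prediction, Eq. (14): F delta_s with F = U^T H(L) applied in the Fourier domain.
    x_bar = x_hat + H * (U.T @ delta_s)
    P_bar = P + sigma_eta
    # New measurement z_new = U^T (s + delta_s), Eq. (12).
    z_new = U.T @ (s + delta_s)
    # Correction, Eqs. (16)-(18): gain, state update, Joseph-form covariance update.
    K = P_bar / (P_bar + sigma_nu)
    x_new = x_bar + K * (z_new - x_bar)
    P_new = (1.0 - K) ** 2 * P_bar + K ** 2 * sigma_nu
    return x_new, P_new   # final prediction: y_hat_new = U @ x_new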
5 EXPERIMENT
This section evaluates GS-IMC (in Section 3) and BGS-IMC (in Section 4) on real-world datasets. All the experiments are conducted on the machines with Xeon 3175X CPU, 128G memory and P40 GPU with 24 GB memory. The source code and models will be made publicly available.
5.1 EXPERIMENTAL SETUP
We adopt three large real-world datasets widely used for evaluating recommendation algorithms: (1) Koubei (1,828,250 ratings of 212,831 users and 10,213 items); (2) Tmall (7,632,826 ratings of 320,497 users and 21,876 items); (3) Netflix (100,444,166 ratings of 400,498 users and 17,770 items). For each dataset, we follow the experimental protocols in (Liang et al., 2018; Wu et al., 2017a) for inductive top-N ranking, where the users are split into training/validation/test sets with ratio 8 : 1 : 1. Then, we use all the data from the training users to optimize the model parameters. In the testing phase, we sort all interactions of the validation/test users in chronological order, holding out the last interaction for testing and inductively generating the necessary representations using the rest of the data. The results in terms of hit-rate (HR) and normalized discounted cumulative gain (NDCG) are reported on the test set for the model which delivers the best results on the validation set.
We implement our method in Apache Spark with Intel MKL, where matrix computation is parallelized and distributed. In experiments, we denote the item-user rating matrix by R and define the Laplacian Ł = I − D_v^{-1/2} R D_e^{-1} Rᵀ D_v^{-1/2}. We set a = 4, γ = 1, ϕ = 10 for GS-IMC, while we set the covariances to Σ_η = Σ_ν = 10⁻⁴I and initialize P using the validation data for BGS-IMC. In the test stage, if a user has |Ω| training interactions, BGS-IMC uses the first |Ω| − 1 interactions to produce the initial state x̂, then feeds the last interaction to simulate the online update.
In the literature, there are few existing works that enable inductive inference for top-N ranking using only the ratings. To make thorough comparisons, we prefer to strengthen IDCF with GCMC for improved performance (IDCF+ for short) rather than report the results of IDCF (Wu et al., 2021) and GCMC (van den Berg et al., 2017) individually. Furthermore, we study their performance with different graph neural networks including ChebyNet (Defferrard et al., 2016), GAT (Veličković et al., 2017), GraphSAGE (Hamilton et al., 2017), SGC (Wu et al., 2019) and ARMA (Bianchi et al., 2021). We adopt the Adam optimizer (Kingma & Ba, 2015) with the learning rate decayed by 0.98 every epoch. We search by grid the learning rate and L2 regularizer in {0.1, 0.01, . . . , 0.00001}, the dropout rate over {0.1, 0.2, . . . , 0.7} and the latent factor size over {32, 64, . . . , 512} for the optimal performance. In addition, we also report the results of the shallow models, i.e., MRFCF (Steck, 2019) and SGMC (Chen et al., 2021), which are most closely related to our proposed method. The software provided by the authors is used in the experiments.
We omit the results of the Markov chain Monte Carlo based FISM (He & McAuley, 2016), variational auto-encoder based MultVAE (Liang et al., 2018), scalable Collrank (Wu et al., 2017b), and the graph neural networks GCMC (van den Berg et al., 2017) and NGCF (Wang et al., 2019), as their accuracies were found to be below par in SGMC (Chen et al., 2021) and IDCF (Wu et al., 2021).
5.2 ACCURACY COMPARISON
In this section, GS-IMC and BGS-IMC assume that the underlying signal is λ_1000-bandlimited, and we compare them with eight state-of-the-art graph-based baselines, including spatial graph models (i.e., IDCF (Wu et al., 2021), IDCF+GAT (Veličković et al., 2017), IDCF+GraphSAGE (Hamilton et al., 2017)), approximate spectral graph models with high-order polynomials (i.e., IDCF+SGC (Wu et al., 2019), IDCF+ChebyNet (Defferrard et al., 2016), IDCF+ARMA (Bianchi et al., 2021)) and exact spectral graph models (i.e., MRFCF (Steck, 2019) and SGMC (Chen et al., 2021)).
In Tables 2 and 3, the results on the real-world Koubei, Tmall and Netflix datasets show that BGS-IMC outperforms all the baselines on all the datasets. Note that MRFCF (Steck, 2019) is the full rank version of GS-IMC with (one-step) random walk regularization. We can see that MRFCF underperforms its counterpart on all three datasets, which demonstrates the advantage of the bandlimited assumption for inductive top-N ranking tasks. Further, BGS-IMC consistently outperforms GS-IMC on all three datasets by a margin, which proves the efficacy of the prediction-correction algorithm for incremental updates. Additionally, we provide extensive ablation studies in Appendix C, scalability studies in Appendix D and more comparisons with SOTA sequential models in Appendix E.
To summarize, the reason why the proposed method can further improve the prediction accuracy is that 1) GS-IMC exploits the structural information in the 1-bit matrix to mitigate the negative influence of discrete label noise in the graph vertex domain; and 2) BGS-IMC further improves the prediction accuracy by considering continuous Gaussian noise in the graph Fourier domain and yielding unbiased and minimum-variance predictions via the prediction-correction update algorithm.
6 CONCLUSION
We have introduced a unified graph signal sampling framework for inductive 1-bit matrix completion, together with theoretical bounds and insights. Specifically, GS-IMC is devised to learn the structural information in the 1-bit matrix to mitigate the negative influence of discrete label noise in the graph vertex domain. Further, BGS-IMC takes into account the model uncertainties in the graph Fourier domain and provides a prediction-correction update algorithm to obtain unbiased and minimum-variance reconstructions. Both GS-IMC and BGS-IMC have closed-form solutions and are highly scalable. Experiments on the task of inductive top-N ranking have shown their superiority.
A RELATED WORK
Inductive matrix completion. There has been a flurry of research on the problem of inductive matrix completion (Chiang et al., 2018; Jain & Dhillon, 2013; Xu et al., 2013; Zhong et al., 2019), which leverages side information (or content features) in the form of feature vectors to predict inductively on new rows and columns. The intuition behind this family of algorithms is to learn mappings from the feature space to the latent factor space, such that inductive matrix completion methods can adapt to new rows and columns without retraining. However, it has recently been shown (Zhang & Chen, 2020; Ledent et al., 2021; Wu et al., 2021) that inductive matrix completion methods provide limited performance due to the inferior expressiveness of the feature space. On the other hand, the prediction accuracy depends strongly on the content quality, but in practice high quality content is becoming hard to collect due to legal risks (Voigt & Von dem Bussche, 2017). By contrast, one advantage of our approach is the capacity for inductive learning without using side information.
Graph neural networks. Inductive representation learning over graph structured data has received significant attention recently due to its ubiquitous applicability. Among the existing works, GraphSAGE (Hamilton et al., 2017) and GAT (Veličković et al., 2017) propose to generate embeddings for previously unseen data by sampling and aggregating features from a node’s local neighbors. In the meantime, various approaches such as ChebyNet (Defferrard et al., 2016) and GCN (Kipf & Welling, 2016) exploit convolutional neural networks to capture sophisticated feature information but are generally less scalable. To address the scalability issue, Wu et al. (2019) develop simplified graph convolutional networks (SGCN) which utilize polynomial filters to simulate the stacked graph convolutional layers. Furthermore, Bianchi et al. (2021) extend auto-regressive moving average (ARMA) filters to convolutional layers for broader frequency responses.
To leverage recent advance in graph neural networks, lightGCN (He et al., 2020), GCMC (van den Berg et al., 2017) and PinSAGE (Ying et al., 2018) represent the matrix by a bipartite graph then generalize the representations to unseen nodes by summing the content-based embeddings over the neighbors. Differently, IGMC (Zhang & Chen, 2020) trains graph neural networks which encode the subgraphs around an edge into latent factors then decode the factors back to the value on the edge. Recently, IDCF (Wu et al., 2021) studies the problem in a downsampled homogeneous graph (i.e., user-user graph in recommender systems) then applies attention networks to yield inductive representations. Probably most closely related to our approach are IDCF (Wu et al., 2021) and IGMC (Zhang & Chen, 2020) which do not assume any side information, such as user profiles and item properties. The key advantage of our approach is not only the closed form solution for efficient GNNs training, but also the theoretical results which guarantee the reconstruction of unseen rows and columns and the practical guidance for potential improvements.
Graph signal sampling. In general, graph signal sampling aims to reconstruct real-valued functions defined on the vertices (i.e., graph signals) from their values on a certain subset of vertices. Existing approaches commonly build upon the assumption of bandlimitedness, by which the signal of interest lies in the span of leading eigenfunctions of the graph Laplacian (Pesenson, 2000; 2008). It is worth noting that we are not the first to consider the connections between graph signal sampling and matrix completion, as recent work by Romero et al. (2016) has proposed a unifying kernel-based framework to broaden both the graph signal sampling and matrix completion perspectives. However, we argue that Romero's work and its successors (Benzi et al., 2016; Mao et al., 2018; McNeil et al., 2021) are orthogonal to our approach, as they mainly focus on real-valued matrix completion in the transductive manner. Specifically, our approach concerns two challenging problems when connecting the ideas and methods of graph signal sampling with inductive one-bit matrix completion: one-bit quantization and online learning.
To satisfy the requirement of online learning, existing works learn the parameters for new rows and columns by performing either stochastic gradient descent, used in MCEX (Giménez-Febrer et al., 2019), or alternating least squares, used in eALS (He et al., 2016). The advantage of BGS-IMC is threefold: (i) BGS-IMC has closed-form solutions, bypassing the well-known difficulty of tuning the learning rate; (ii) BGS-IMC considers random Gaussian noise in the graph Fourier domain, characterizing the uncertainties in the measurement and modeling; and (iii) the prediction-correction algorithm, resembling Kalman filtering, can provide unbiased and minimum-variance reconstructions.
Probably most closely related to our approach are SGMC (Chen et al., 2021) and MRFCF (Steck, 2019) in the sense that both of them formulate their solutions as spectral graph filters and can be regarded as methods for data filtering in domains of discrete signal processing. More specifically, SGMC optimizes latent factors V,U by minimizing the normalized matrix reconstruction error:
min_{U,V} ‖D_v^{-1/2} R D_e^{-1/2} − VU‖,  s.t. ‖U‖ ≤ ε, ‖V‖ ≤ η,   (19)
while MRFCF minimizes the following matrix reconstruction error:
min_X ‖R − XR‖ + λ‖X‖,  s.t. diag(X) = 0,   (20)
where the diagonal entries of the parameter X are forced to zero. It is now obvious that both SGMC and MRFCF focus on minimizing a matrix reconstruction error. This is one of the key differences from our graph signal sampling framework, which optimizes the functional minimization problem defined in Eq. (5). We argue that our problem formulation is more suitable for inductive one-bit matrix completion, since it focuses on the reconstruction of bandlimited functions, no matter whether the function is observed during training or at test time. Perhaps more importantly, both methods (Chen et al., 2021; Steck, 2019) can be included as special cases of our framework. We believe that a unified framework across graph signal sampling and inductive matrix completion could benefit both fields, since modeling knowledge from both domains can be more deeply shared.
Advantages of graph signal sampling perspectives. A graph signal sampling perspective requires modeling 1-bit matrix data as signals on a graph and formulating the objective in a functional space. Doing so opens the possibility of processing, filtering and analyzing the matrix data with vertex-frequency analysis (Hammond et al., 2011; Shuman et al., 2013), time-variant analysis (Mao et al., 2018; McNeil et al., 2021), smoothing and filtering (Kalman, 1960; Khan & Moura, 2008), etc. In this paper, we technically explore the use of graph spectral filters to inductively recover the missing values of a matrix, a Kalman-filtering based approach to deal with streaming data in the online learning scenario, and vertex-frequency analysis to discover the advantages of the dynamic BERT4REC model over the static BGS-IMC model. We believe that our graph signal sampling framework can serve as a new paradigm for 1-bit matrix completion, especially in large-scale and dynamic systems.
B GENERALIZING SGMC AND MRFCF
This section shows how GS-IMC generalizes SGMC (Chen et al., 2021) and MRFCF (Steck, 2019).
GS-IMC generalizes SGMC. Given the observation R, we follow the standard routine of hypergraphs (Zhou et al., 2007) to calculate the hypergraph Laplacian matrix Ł = I − D_v^{-1/2} R D_e^{-1} Rᵀ D_v^{-1/2}, where D_v (D_e) is the diagonal degree matrix of vertices (edges). Then the rank-k approximation (see Eq. (9) in (Chen et al., 2021)) is equivalent to our result using the bandlimited norm R(λ) = 1 if λ ≤ λ_k and R(λ) = ∞ otherwise,
ŷ = (∑_l (1 + R(λ_l)/ϕ) u_l u_lᵀ)^{-1} s = ∑_{l≤k} u_l u_lᵀ s = U_k U_kᵀ s,

where we set ϕ = ∞ with lim_{ϕ→∞} R(λ)/ϕ = ∞ for λ > λ_k, and the matrix U_k comprises the k leading eigenvectors whose eigenvalues are less than or equal to λ_k.
GS-IMC generalizes MRFCF. Given R, we simply adopt the correlation relationship to construct the affinity matrix and define the Laplacian as Ł = 2I − D_v^{-1/2} R Rᵀ D_v^{-1/2}. Then the matrix approximation (see Eq. (4) in (Steck, 2019)) is equivalent to our GS-IMC approach using the one-step random walk norm,
ŷ = (∑_l (1 + 1/(a − λ_l)) u_l u_lᵀ)^{-1} s
= ∑_l (1 − 1/(a − λ_l + 1)) u_l u_lᵀ s
= { I − ((a + 1)I − Ł)^{-1} } s
= { I − ((a − 1)I + D_v^{-1/2} R Rᵀ D_v^{-1/2})^{-1} } s,
where we set ϕ = 1 and a ≥ λmax is a pre-specified parameter for the random walk regularization.
C ABLATION STUDIES
This study evaluates how GS-IMC and BGS-IMC perform with different choices of the regularization function and the graph definition. In the following, we assume the underlying signal to recover is in the Paley-Wiener space PW_{λ1000}(G), and hence we only take the first 1000 eigenfunctions whose eigenvalues are not greater than λ_1000 to make predictions.
C.1 IMPACT OF REGULARIZATION FUNCTIONS
Tables 4 and 5 show that, for the proposed GS-IMC models, Tikhonov regularization produces the best HR and NDCG results on both Koubei and Netflix, while diffusion process regularization performs best on Tmall. Meanwhile, BGS-IMC with random walk regularization achieves the best HR and NDCG results on Koubei, while Tikhonov regularization and diffusion process regularization are best on Tmall and Netflix, respectively. Perhaps more importantly, BGS-IMC consistently outperforms GS-IMC on all three datasets by a margin, which proves the efficacy of the prediction-correction algorithm.
We highlight that the reason why BGS-IMC can further improve the performance of GS-IMC is that BGS-IMC considers Gaussian noise in the Fourier domain, and the prediction-correction update algorithm is capable of providing unbiased and minimum-variance predictions.
C.2 IMPACT OF GRAPH DEFINITIONS
Table 6 presents the HR and NDCG results of GS-IMC with one-step random walk regularization on the Netflix prize data. To avoid clutter, we omit the results of GS-IMC with other regularization functions, since they share the same trends. It appears that the regular graph that uses the covariance matrix as the affinity matrix yields better HR and NDCG results when recommending 10 and 50 items, while the hypergraph helps achieve better results when recommending 100 items.
D SCALABILITY STUDIES
The solution for either GS-IMC or BGS-IMC requires computing the leading eigenvectors whose eigenvalues are less than or equal to a pre-specified ω. However, one might argue that this is computationally intractable on industry-scale datasets. To address this concern, one feasible approach is to perform the Nyström (Fowlkes et al., 2004) method to obtain the leading eigenvectors. For completeness, we present the pseudo-code of the approximate eigendecomposition (Chen et al., 2021) in Algorithm 1, whose computational complexity is O(lnk + k³), where n is the number of columns in Ł, l is the number of sampled columns and k is the number of eigenvectors to compute. This reduces the overhead from O(n³) to O(lnk + k³), linear in the number of rows. To evaluate how the proposed GS-IMC and BGS-IMC methods perform with the approximate eigenvectors, we conduct experiments on the largest Netflix prize data. Table 7 reports the HR, NDCG and runtime results for the standard GS-IMC and BGS-IMC methods, and their scalable versions entitled GS-IMCs and BGS-IMCs. To make the comparison complete, we also present the results of the neural IDCF (Wu et al., 2021) model equipped with ChebyNet (Defferrard et al., 2016). It is obvious that the standard GS-IMC and BGS-IMC methods consume only a small fraction of the training time required by graph neural networks. Meanwhile, GS-IMCs achieves comparable ranking
Algorithm 1 Approximate Eigendecomposition
Require: n × l matrix C derived from l columns sampled from the n × n kernel matrix L without replacement, l × l matrix A composed of the intersection of these l columns, l × l matrix W, rank k, the oversampling parameter p and the number of power iterations q.
Ensure: approximate eigenvalues Σ̃ and eigenvectors Ũ.
1: Generate a random Gaussian matrix Ω ∈ R^{l×(k+p)}, then compute the sample matrix A^q Ω.
2: Perform a QR-decomposition on A^q Ω to obtain an orthonormal matrix Q that satisfies A^q Ω = QQᵀA^q Ω, then solve Z QᵀΩ = QᵀWΩ.
3: Compute the eigenvalue decomposition of the (k+p)-by-(k+p) matrix Z, i.e., Z = U_Z Σ_Z U_Zᵀ, to obtain U_W = QU_Z[:, :k] and Σ_W = Σ_Z[:k, :k].
4: Return Σ̃ ← Σ_W, Ũ ← C A^{-1/2} U_W Σ_W^{-1/2}.
performance to GS-IMC, while improving the efficiency by 8X. Likewise, BGS-IMCs enjoys the improvement in system scalability without significant loss in prediction accuracy. The overall results demonstrate that GS-IMC and BGS-IMC are highly scalable on very large data.
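For reference, the sketch below mirrors Algorithm 1 in NumPy; forming C, A and W from the sampled columns is assumed to be done beforehand, and the helper _inv_sqrt_psd plus the symmetrization of Z are our own additions for numerical robustness.

import numpy as np

def _inv_sqrt_psd(A):
    # Inverse square root of a PSD matrix via its eigendecomposition.
    w, V = np.linalg.eigh(A)
    return V @ np.diag(1.0 / np.sqrt(np.clip(w, 1e-12, None))) @ V.T

def approx_eigendecomposition(C, A, W, k, p=10, q=2):
    l = A.shape[0]
    Omega = np.random.randn(l, k + p)                    # step 1: Gaussian test matrix
    Y = np.linalg.matrix_power(A, q) @ Omega             # sample matrix A^q Omega
    Q, _ = np.linalg.qr(Y)                               # step 2: orthonormal basis
    Z = (Q.T @ W @ Omega) @ np.linalg.inv(Q.T @ Omega)   # solve Z Q^T Omega = Q^T W Omega
    Z = 0.5 * (Z + Z.T)                                  # symmetrize for stability
    vals, U_Z = np.linalg.eigh(Z)                        # step 3: small eigenproblem
    idx = np.argsort(vals)[::-1][:k]                     # keep the k leading eigenpairs
    Sigma, U_W = vals[idx], Q @ U_Z[:, idx]
    U_tilde = C @ _inv_sqrt_psd(A) @ U_W / np.sqrt(np.abs(Sigma))  # step 4: lift to n dims
    return Sigma, U_tilde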
E SPECTRUM ANALYSIS AND DISCUSSION WITH SEQUENTIAL MODELS
We compare BGS-IMC with recent sequential recommendation models, including the Transformer-based SASREC (Kang & McAuley, 2018), BERT-based BERT4REC (Sun et al., 2019) and causal-CNN-based GREC (Yuan et al., 2020). We choose an embedding size of 256 and search for the optimal hyper-parameters by grid. Each model is configured using the same parameters provided by the original paper, i.e., two attention blocks with one head for SASREC, three attention blocks with eight heads for BERT4REC, and six dilated CNNs with degrees 1, 2, 2, 4, 4, 8 for GREC.
Table 8 presents the HR and NDCG results on Koubei for inductive top-N ranking. Note that BGS-IMC only accepts the most recent behavior to update the obsolete state for incremental learning, whereas SASREC, BERT4REC and GREC focus on modeling the dynamic patterns in the sequence. Hence, such a comparison is not in favor of BGS-IMC. Interestingly, we see that the static BGS-IMC achieves HR results comparable to SOTA sequential models, while consuming a small fraction of their running time. From this viewpoint, BGS-IMC is more cost-effective than the compared methods.
To fully understand the performance gap in NDCG, we analyze GS-IMC, BGS-IMC and the best baseline BERT4REC in the graph spectral domain, where we limit the ℓ2 norm of each user's spectral signal to one and visualize the averaged values in Figure 4. As expected, the energy of GS-IMC and BGS-IMC is concentrated on the low frequencies, since the high-frequency functions are highly penalized during minimization. Furthermore, the proposed prediction-correction update algorithm increases the energy of high-frequency functions. This bears a similarity to BERT4REC, whose high-frequency functions are not constrained and can aggressively raise the rankings of unpopular items. This explains why BERT4REC and BGS-IMC have better NDCGs than GS-IMC.
F LIMITATION AND FUTURE WORK
Limitation on sequence modeling. The proposed BGS-IMC method is simple and cannot capture the sophisticated dynamics in a sequence. However, we believe that our work opens the possibility of benefiting sequential recommendation with graph signal processing techniques, for example the extended Kalman filter, KalmanNet and particle filters.
Limitation on sample complexity. The sample complexity is not provided in the paper, and we believe that this is an open problem due to the lack of regularity in the graph, which prevents us from defining the idea of sampling "every other node" (the reader is referred to (Anis et al., 2016; Ortega et al., 2018) for more details).
Future work on deep graph learning. Though GS-IMC and BGS-IMC are mainly compared with neural graph models, we note that our approach can help improve the performance of existing graph neural networks, including GAT (Veličković et al., 2017) and SAGE (Hamilton et al., 2017), etc. We summarize the following directions for future work: 1) it is interesting to see how GS-IMC can take advantage of content features; one feasible idea is to use GS-IMC as multi-scale wavelets which can be easily adapted to graph neural networks; 2) BGS-IMC can also be utilized to optimize the aggregation module for improved robustness, as every neighbor's representation can be viewed as a measurement of the query node's representation.
G PROOF OF THEOREM 4
Proof. This proof is analogous to that of Theorem 1.1 in (Pesenson, 2009), where we extend their results from the Sobolev norm to a broader class of positive, monotonically increasing functionals.
Proof of the first part of the Theorem 4.
Suppose that the Laplacian operator Ł has bounded inverse and the fitting error is 0. If y ∈ PW_ω(G) and ŷ_k interpolates y on a set Ω = V − Ω^c, and Ω^c admits the Poincare inequality ‖φ‖ ≤ Λ‖Łφ‖ for any φ ∈ L²(Ω^c), then y − ŷ_k ∈ L²(Ω^c) and we have

‖y − ŷ_k‖ ≤ Λ‖Ł(y − ŷ_k)‖.
At this point, we can apply Lemma 7 with a = Λ and φ = y − ŷ_k. It gives the following inequality

‖y − ŷ_k‖ ≤ Λ^k ‖Ł^k(y − ŷ_k)‖

for all k = 2^l, l = 0, 1, 2, . . . Since R(λ) is a positive and monotonically increasing function with λ ≤ R(λ), it gives

Λ^k ‖Ł^k(y − ŷ_k)‖ ≤ Λ^k ‖R(Ł)^k(y − ŷ_k)‖.
Because the interpolant ŷ_k minimizes the norm ‖R(Ł)^k · ‖, we have

‖R(Ł)^k(y − ŷ_k)‖ ≤ ‖R(Ł)^k y‖ + ‖R(Ł)^k ŷ_k‖ ≤ 2‖R(Ł)^k y‖.
As for functions y ∈ PW_ω(G) ⊂ PW_{R(ω)}(G), the Bernstein inequality in Lemma 8 holds:

‖R(Ł)^k y‖ ≤ R(ω)^k ‖y‖,  k ∈ N.
Putting everything together, we conclude the first part of Theorem 4:
‖y − ŷ_k‖ ≤ 2(ΛR(ω))^k ‖y‖,  ΛR(ω) < 1,  k = 2^l, l ∈ N.   (21)
Proof of the second part of the Theorem 4.
Since ΛR(ω) < 1 holds, it gives the following limits:

lim_{k→∞} (ΛR(ω))^k = 0   and   lim_{k→∞} ‖y − ŷ_k‖ ≤ 0.
With the non-negativity of the norm, we have

‖y − ŷ_k‖ ≥ 0.   (22)

This implies the second part of Theorem 4:

y = lim_{k→∞} ŷ_k.   (23)
Lemma 7 (restated from Lemma 4.1 in (Pesenson, 2009)). Suppose that Ł is a bounded self-adjoint positive definite operator in a Hilbert space L²(G), and ‖φ‖ ≤ a‖Łφ‖ holds true for any φ ∈ L²(G) and a positive scalar a > 0. Then for all k = 2^l, l = 0, 1, . . . , the following inequality holds true:

‖φ‖ ≤ a^k ‖Ł^k φ‖.   (24)
Lemma 8 (restated from Theorem 2.1 in (Pesenson, 2008)). A function f ∈ L²(G) belongs to PW_ω(G) if and only if the following Bernstein inequality holds true for all s ∈ R+:

‖Ł^s f‖ ≤ ω^s ‖f‖.   (25)
G.1 EXTRA DISCUSSION
In (Pesenson, 2008), the complementary set S = Ω^c = V − Ω which admits the Poincare inequality is called a Λ-set. Theorem 4 in our paper and Theorem 1.1 in (Pesenson, 2009) state that bandlimited functions y ∈ PW_ω can be reconstructed from their values on a uniqueness set Ω = V − S. To better understand the concept of a Λ-set, we restate Lemma 9 from (Pesenson, 2008), which presents the conditions for a Λ-set. It is worth pointing out that (i) the second condition suggests that the vertices of the Λ-set are likely to be sparsely connected with the uniqueness set Ω; and (ii) the vertices in the Λ-set are disconnected from each other or isolated in the subgraph constructed by the vertices S, since otherwise there always exists a non-zero function φ ∈ L²(S), ‖φ‖ ≠ 0, which makes ‖Łφ‖ = 0.
Lemma 9 (restated from Lemma 3.6 in (Pesenson, 2008)). Suppose that for a set of vertices S ⊂ V (finite or infinite) the following holds true:
1. every point from S is adjacent to a point from the boundary bS, the set of all vertices in V which are not in S but adjacent to a vertex in S;
2. for every v ∈ S there exists at least one adjacent point uv ∈ bS whose adjacency set intersects S only over v;
3. the number Λ = sup_{v∈S} d(v) is finite;
Then the set S is a Λ-set which admits the Poincare inequality

‖φ‖ ≤ Λ‖Łφ‖,  φ ∈ L²(S).   (26)
In our experiments for recommender systems, each user's ratings might not comply with the Poincare inequality. This is because there exist some users who prefer niche products/movies (low-degree nodes). As shown in Fig. 2, user preferences on low-degree nodes are determined by high-frequency functions. When R(ω) is not large enough, the Poincare inequality does not hold for such users. This also explains why our model performs poorly for cold items.
Regarding the choice of parameter k, empirical results show that using k ≥ 2 does not help improve the performance. Note that when k is large enough, all kernels reduce to the bandlimited norm, i.e., R(λ) = 1 if λ ≤ λ_k ≤ 1, since the gap between eigenvalues shrinks.
H PROOF OF THEOREM 5
Proof. Let ξ denote the random label noise which flips a 1 to 0 with rate ρ, and assume that the sample s = y + ξ is observed from y under noise ξ. Then for a graph spectral filter H_ϕ = (I + R(Ł)/ϕ)^{-1} with positive ϕ > 0, we have
E[MSE(y, ŷ)] = (1/n) E‖y − H_ϕ(y + ξ)‖²
≤ (1/n) E‖H_ϕ ξ‖² + (1/n) ‖(I − H_ϕ)y‖²,   (27)
where the last inequality holds due to the triangle inequality of the matrix norm.
To bound E‖H_ϕ ξ‖², let C_n = R^{1/2}(ω)‖y‖; then

E‖H_ϕ ξ‖² (a)= ∑_{y(v)=1} [ ρ (H_ϕ,(∗,v) × (−1))² + (1 − ρ)(H_ϕ,(∗,v) × 0)² ]
= ρ ∑_{y(v)=1} (H_ϕ,(∗,v) y(v))² = ρ‖H_ϕ y‖²
(b)≤ sup_{‖R^{1/2}(Ł)y‖ ≤ C_n} ρ‖H_ϕ y‖² = sup_{‖z‖ ≤ C_n} ρ‖H_ϕ R^{-1/2}(Ł) z‖²
= ρ C_n² σ_max²(H_ϕ R^{-1/2}(Ł))
= ρ C_n² max_{l=1,...,n} [ 1/(1 + R(λ_l)/ϕ)² · 1/R(λ_l) ]
≤ ρ ϕ² C_n² / (R(λ_1)(ϕ + R(λ_1))²),   (28)
where (a) follows the definition of the flip random noise ξ, and (b) holds due to the fact that y is in the Paley-Wiener space PW_ω(G). As for the second term,
‖(I − H_ϕ)y‖² ≤ sup_{‖R^{1/2}(Ł)y‖ ≤ C_n} ‖(I − H_ϕ)y‖²
(a)= sup_{‖z‖ ≤ C_n} ‖(I − H_ϕ) R^{-1/2}(Ł) z‖²
= C_n² σ_max²((I − H_ϕ) R^{-1/2}(Ł)) = C_n² max_{l=1,...,n} [ (1 − 1/(1 + R(λ_l)/ϕ))² · 1/R(λ_l) ]
= (C_n²/ϕ) max_{l=1,...,n} [ (R(λ_l)/ϕ) / (R(λ_l)/ϕ + 1)² ]
(b)≤ C_n²/(4ϕ),   (29)
where (a) holds due to the fact that the eigenvectors of I − H_ϕ are the eigenvectors of R(Ł), and (b) follows the simple upper bound x/(1 + x)² ≤ 1/4 for x ≥ 0. By combining everything together, we conclude the result:
E[MSE(y, ŷ)] ≤ (C_n²/n) ( ρϕ² / (R(λ_1)(ϕ + R(λ_1))²) + 1/(4ϕ) ).   (30)
H.1 EXTRA DISCUSSION
Choosing ϕ to balance the two terms on the right-hand side above gives ϕ* = ∞ for ρ < 1/8 and 1 + R(λ_1)/ϕ* = 2ρ^{1/3} for ρ ≥ 1/8. Plugging in this choice, we have the upper bound, if ρ ≥ 1/8,

E[MSE(y, ŷ)] ≤ (C_n² / (4R(λ_1)n)) (3ρ^{1/3} − 1),   (31)

and if ρ < 1/8, then the upper bound is

E[MSE(y, ŷ)] ≤ C_n² ρ / (4R(λ_1) n).   (32)
This result implies that we can use a large ϕ to obtain accurate reconstruction when the flip rate ρ is not greater than 1/8, and that ϕ needs to be carefully tuned when the flip rate ρ is greater than 1/8.
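The rule above can be coded directly; optimal_phi is a hypothetical helper name, not from the paper.

# phi* = infinity when rho < 1/8; otherwise solve 1 + R(lambda_1)/phi* = 2 rho^(1/3).
def optimal_phi(rho, R_lambda1):
    if rho < 1.0 / 8.0:
        return float("inf")
    return R_lambda1 / (2.0 * rho ** (1.0 / 3.0) - 1.0)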
I PROOF OF PROPOSITION 6
As below we present the proof in a Bayesian framework, and the reader is referred to (Maybeck, 1982) for a geometrical interpretation of Monte Carlo estimate statistics.
Proof of the minimal variance
To minimize the estimate variance, we need to minimize the main diagonal of the covariance P_new:

trace(P_new) = trace( (I − K) P̄_new (I − K)ᵀ + K Σ_ν Kᵀ ).

Then, we differentiate the trace of P_new with respect to K:

d trace(P_new) / dK = 2K P̄_new − 2P̄_new + 2KΣ_ν.

The optimal K which minimizes the variance should satisfy d trace(P_new)/dK = 0, which gives

K(P̄_new + Σ_ν) = P̄_new.

This implies that the variance of the estimate x̂_new is minimized when K = P̄_new(P̄_new + Σ_ν)^{-1}, matching Eq. (18).
Proof of the unbiasedness
Suppose that the obsolete estimate x̂ is unbiased, i.e., Ex̂ = x; then using Eq. (11) we have

E(x̄_new) = E(x̂ + F∆s) = x + F∆s = x_new.

Because of Eq. (12) and the fact that the measurement noise ν has zero mean, it gives

E(z_new) = E(x_new + ν) = x_new.

Putting everything together, we conclude the following result:

E(x̂_new) = E( x̄_new + K(z_new − x̄_new) ) = x_new + K(x_new − x_new) = x_new.   (33)
This implies that the estimate state x̂new is unbiased.
J IMPLEMENTATION DETAILS
In this section, we present the details for our implementation in Section 5 including the additional dataset details, evaluation protocols, model architectures in order for reproducibility. All the experiments are conducted on the machines with Xeon 3175X CPU, 128G memory and P40 GPU with 24 GB memory. The configurations of our environments and packages are listed below:
• Ubuntu 16.04
• CUDA 10.2
• Python 3.7
• Tensorflow 1.15.3
• Pytorch 1.10
• DGL 0.7.1
• NumPy 1.19.0 with MKL Intel
J.1 ADDITIONAL DATASET DETAILS
We use three real-world datasets which are processed in line with (Liang et al., 2018; Steck, 2019): (1) for Koubei2, we keep users with at least 5 records and items that have been purchased by at least 100 users; and (2) for Tmall3, we keep users who click at least 10 items and items which have been seen by at least 200 users; and (3) for Netflix4, we keep all of the users and items. In addition, we chose the random seed as 9876 when splitting the users into training/validation/test sets.
2https://tianchi.aliyun.com/dataset/dataDetail?dataId=53 3https://tianchi.aliyun.com/dataset/dataDetail?dataId=35680 4https://kaggle.com/netflix-inc/netflix-prize-data
J.2 EVALUATION PROTOCOLS
In Figure 5, we illustrate the difference between the transductive ranking and inductive ranking evaluation protocols. In the transductive ranking problem, the model performance is evaluated on the users already known during model training, whereas in the inductive ranking problem the model performance is evaluated on unseen users. It is worth noting that in the testing phase, we sort all interactions of the validation/test users in chronological order, holding out the last interaction for testing and inductively generating the necessary representations on the rest of the data. In a nutshell, we evaluate our approach and the baselines on the challenging inductive next-item prediction problem.
J.3 EVALUATION METRICS
We adopt hit-rate (HR) and normalized discounted cumulative gain (NDCG) to evaluate the model performance. Suppose that the model provides N recommended items for user u as R_u, and let T_u denote the interacted items of the user; then HR is computed as follows:
HR@N = E_u 1_{|T_u ∩ R_u|},   (34)

where 1_{|Ω|} is equal to 1 if the set Ω is not empty and is equal to 0 otherwise. NDCG evaluates ranking performance by taking the positions of correct items into consideration:
NDCG@N = (1/Z) DCG@N = (1/Z) ∑_{j=1}^{N} (2^{1_{|R_u^j ∩ T_u|}} − 1) / log₂(j + 1),   (35)
where Z is the normalization constant that represents the maximum value of DCG@N for T_u.
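Both metrics can be computed per user as in the following sketch; the helper names are ours, written for one user's ordered top-N list ranked and held-out items holdout.

import math

def hr_at_n(ranked, holdout):
    # Eq. (34): 1 if at least one held-out item appears in the top-N list.
    return 1.0 if set(ranked) & set(holdout) else 0.0

def ndcg_at_n(ranked, holdout):
    # Eq. (35): j runs from 1 to N; enumerate is 0-based, hence log2(j + 2).
    dcg = sum(1.0 / math.log2(j + 2) for j, v in enumerate(ranked) if v in holdout)
    ideal = sum(1.0 / math.log2(j + 2) for j in range(min(len(holdout), len(ranked))))
    return dcg / ideal if ideal > 0 else 0.0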
J.4 GRAPH LAPLACIAN
Let R denote the item-user rating matrix, Dv and De denotes the diagonal degree matrix of vertices and edges respectively, then graph Laplacian matrix used in our experiments is defined as follows:
Ł = I − D_v^{-1/2} R D_e^{-1} Rᵀ D_v^{-1/2},   (36)

where I is the identity matrix.
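A dense NumPy sketch of Eq. (36) follows; at the scale of the datasets used here a sparse implementation would be preferred, and the clipping of degrees is our own guard against empty rows or columns.

import numpy as np

def hypergraph_laplacian(R):
    # R is the item-user rating matrix: items as vertices, users as hyperedges.
    dv = np.maximum(R.sum(axis=1), 1e-12)   # vertex (item) degrees
    de = np.maximum(R.sum(axis=0), 1e-12)   # hyperedge (user) degrees
    dv_is = 1.0 / np.sqrt(dv)               # D_v^{-1/2} stored as a vector
    S = (dv_is[:, None] * R / de[None, :]) @ (R.T * dv_is[None, :])
    return np.eye(R.shape[0]) - S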
J.5 DISCUSSION ON PREDICTION FUNCTIONS
In experiments, we focus on making personalized recommendations to the users, so we are interested in the ranks of the items for each user. Specifically, for the top-k ranking problem we choose the items with the k largest predicted ratings,
Recommendation@k = max_{|O|=k} ∑_{v∈O, v∉Ω+} ŷ(v).   (37)
More importantly, our proposed method is also suitable for the link prediction problem, where the goal is to classify whether an edge between two vertices exists or not. This can be done by choosing a splitting point to partition the candidate edges into two parts. There are many different ways of choosing such a splitting point. One can select the optimal splitting point based on the ROC or AUC results on the validation set.
J.6 MODEL ARCHITECTURES
As mentioned before, we equip IDCF (Wu et al., 2021) with different GNN architectures as the backbone. Here we introduce the details for them.
GAT. We use the GATConv layer available in DGL for implementation. The detailed architecture description is as below:
• A sequence of one-layer GATConv with four heads.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use tanh as the activation.
• Use inner product between user embedding and item embedding as ranking score.
GraphSAGE. We use the SAGEConv layer available in DGL for implementation. The detailed architecture description is as below:
• A sequence of two-layer SAGEConv.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use ReLU as the activation.
• Use inner product between user embedding and item embedding as ranking score.
SGC. We use the SGConv layer available in DGL for implementation. The detailed architecture description is as below:
• One-layer SGConv with two hops.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use ReLU as the activation.
• Use inner product between user embedding and item embedding as ranking score.
ChebyNet. We use the ChebConv layer available in DGL for implementation. The detailed architecture description is as below:
• One-layer ChebConv with two hops.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use ReLU as the activation.
• Use inner product between user embedding and item embedding as ranking score.
ARMA. We use the ARMAConv layer available in DGL for implementation. The detailed architecture description is as below:
• One-layer ARMAConv with two hops.
• Add self-loop and use batch normalization for graph convolution in each layer.
• Use tanh as the activation.
• Use inner product between user embedding and item embedding as ranking score.
We also summarize the implementation details of the compared sequential baselines as follows.
SASREC.5 We use the software provided by the authors for experiments. The detailed architecture description is as below:
• A sequence of two-block Transformer with one head.
• Set the maximum sequence length to 30.
• Use inner product between user embedding and item embedding as ranking score.
BERT4REC.6 We use the software provided by the authors for experiments. The detailed architecture description is as below:
• A sequence of three-block Transformer with eight heads.
• Set the maximum sequence length to 30 with the masked probability 0.2.
• Use inner product between user embedding and item embedding as ranking score.
5https://github.com/kang205/SASRec 6https://github.com/FeiSun/BERT4Rec
GREC.7 We use the software provided by the authors for experiments. The detailed architecture description is as below:
• A sequence of six-layer dilated CNN with degrees 1, 2, 2, 4, 4, 8.
• Set the maximum sequence length to 30 with the masked probability 0.2.
• Use inner product between user embedding and item embedding as ranking score.
7https://github.com/fajieyuan/WWW2020-grec | 1. What is the main contribution of the paper regarding one-bit completion?
2. What are the strengths and weaknesses of the proposed algorithms, particularly in terms of scalability, explainability, and performance?
3. Do you have any questions or concerns about the presentation and clarity of the paper, such as the link between M, R, y, and s, the noise model, the setup of BGS-IMC, and the experimental setup?
4. How does the reviewer assess the novelty, clarity, quality, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies the problem of one-bit completion: the task is to recover a vector y ∈ {0, 1}^n from the observation of a subset of its positive entries. The model assumes that there exists a matrix R ∈ {0, 1}^{m×n} containing additional information about the model, e.g., m previous partial observations of similar data.
The main idea is that this matrix R can be turned into a graph on n vertices, whose weights represent the similarity between vertices. The completion problem can thus be cast as a graph signal sampling problem using the Laplacian L of the graph: the general assumption is that y is mostly concentrated on the low-frequency eigenvectors of L.
Next, the authors study the online version of this problem, where the observations can be noisy. In this case, they propose a Bayesian algorithm that assumes the noise is Gaussian in the Fourier basis of L. This is an adaptation of prediction-correction-update algorithms to the case of (0, 1) observations.
Finally, they propose several experiments to gauge the performance of their algorithms. The first part compares them to other graph-based methods, and the second part to more complicated (e.g. transformer-based) sequential recommendation algorithms. In the first case, the Bayesian version BGS-IMC is shown to mostly outperform its counterparts, while in the second case, the performance is comparable to (sometimes slightly worse than) other architectures but the runtime is much lower.
Strengths And Weaknesses
The main strengths of the proposed algorithms are their scalability and explainability: since they are fairly simple to implement (the first one even has a closed-form solution), they are much faster than heavier neural network counterparts; and on the other hand even the learned elements of the algorithms (such as the covariance matrices of BGS-IMC) can have interesting interpretations. The algorithms are quite thoroughly studied, with ablation and scalability studies performed in the appendix.
On the other hand, the paper is very hermetic for people that are not familiar with the setting and state of the art; it took me quite a long time to grasp the exact setting of the original problem. The introduction to BGS-IMC suffers the same problem: it is hard to understand exactly what ∆s is, and why the model is set up that way. Overall, this paper needs significant work to make it accessible to non-experts of the field.
Clarity, Quality, Novelty And Reproducibility
Novelty: the GS-IMC algorithm seems to be a straightforward generalization of SGMC (Chen et al.), with very similar performance. There are however some novel consistency results (one on the effect of bandwidth limiting, and the other on the effect of noise). To the best of my knowledge, the BGS-IMC is completely novel in one-bit completion, and is the main innovation of the paper.
Clarity/Reproducibility: here are some more precise remarks on the clarity issues
Model definition: the presentation at the beginning of the paper is very confusing; the link between M, R, y and s is hard to grasp, and would benefit from a much more formal presentation similar to the one of graph sampling. In particular, the noise model is unclear: are bits only flipped from 1 to 0, or can there be opposite flips?
BGS-IMC: the setup is again slightly confusing. I don't exactly understand why you introduce two variables x and z, when the difference between them is simply some additive noise; in general I feel like Equations (11) and (12) need more explanation. On the other hand, once we accept the model, the rest is fairly straightforward and explained well.
Experiments: again, the experimental setup may need more explanation; the metrics in Tables 3 and 4 are not defined, especially the @50 or @100 suffixes. I also didn't see which regularizer R was used to produce the results in the tables. However, I really appreciate the effort made to keep the algorithms compared against as efficient as possible.
Minor remarks:
Figure 2 (and 4-6): the algorithm names are wrong
throughout the paper (mainly in Section 4, see e.g. Eq. 18 and below Eq. 15), A^− instead of A^{−1} is used for inverting matrices; or is that another operation?
ICLR | Title
Decoupled Greedy Learning of Graph Neural Networks
Abstract
Graph Neural Networks (GNNs) become very popular for graph-related applications due to their superior performance. However, they have been shown to be computationally expensive in large scale settings, because their produced node embeddings have to be computed recursively, which scales exponentially with the number of layers. To address this issue, several sampling-based methods have recently been proposed to perform training on a subset of nodes while maintaining the fidelity of the trained model. In this work, we introduce a decoupled greedy learning method for GNNs (DGL-GNN) that, instead of sampling the input graph, decouples the GNN into smaller modules and associates each module with greedy auxiliary objectives. Our approach allows GNN layers to be updated during the training process without waiting for feedback from successor layers, thus making parallel GNN training possible. Our method achieves improved efficiency without significantly compromising model performances, which would be important for time or memory limited applications. Further, we propose a lazy-update scheme during training to further improve its efficiency. We empirically analyse our proposed DGL-GNN model, and demonstrate its effectiveness and superior efficiency through a range of experiments. Compared to the sampling-based acceleration, our model is more stable, and we do not have to trade-off between efficiency and accuracy. Finally, we note that while here we focus on comparing the decoupled approach as an alternative to other methods, it can also be regarded as complementary, for example, to sampling and other scalability-enhancing improvements of GNN training.
1 INTRODUCTION
Graph Neural Networks (GNN) have been shown to be highly effective in graph-related tasks, such as node classification (Kipf & Welling, 2016), graph classification (Ying et al., 2018b), graph matching (Bai et al., 2019), and recommender system (Ying et al., 2018a). Given a graph of arbitrary size and attributes, GNNs obtain informative node embeddings by first conducting a graph convolution operation to aggregate information from the neighbors of each node, and then transforming the aggregated information. As a result, GNNs can fuse together the topological structure and node features of a graph, and have thus became dominant models for graph-based applications.
Despite its superior representation power, the graph convolution operation has been shown to be expensive when GNNs become deep and wide (Chen et al., 2017). Therefore, training a deep GNN model is challenging for large and dense graphs. Since deep and wide GNNs are becoming increasingly important with the emergence of classification tasks on large graphs, such as the newly proposed OGB datasets (Hu et al., 2020), and semantic segmentation tasks as introduced in (Li et al., 2019), we focus here on studying methods for alleviating computational burdens associated with large-scale GNN training.
Several strategies have been proposed during the past years to alleviate this computation issue of large-scale GNNs. GraphSAGE (Hamilton et al., 2017) took the first step to leverage a neighborhood sampling strategy for GNNs training, which only aggregates a sampled subset of neighbors of each node in the graph convolution operation. However, though this sampling method helps reduce memory and time cost for shallow GNNs, it computes the representation of a node recursively, and the node’s receptive field grows exponentially with the number of GNN layers, which may make
the memory and time cost even goes larger for deeper GNNs when the sample number is big. The work of Chen et al. (2017; 2018); Zou et al. (2019) developed sampling-based stochastic training methods to train GNNs more efficiently and avoid this exponential growth problem. Chiang et al. (2019) proposed a batch learning algorithm by exploiting the graph clustering structure. Beyond the aforementioned methods, recently, You et al. (2020) proposed a layer-wise sequential training algorithm for GNNs, which decouples the aggregation and transformation operations in the per-layer feed-forward process and reduces the time and memory cost during training while not sacrificing too much model capability, this indicates that the GNN layers do not have to be learned jointly. However, the sequential training would bring some inefficiency.
In addition to the inefficiency brought by the graph convolution operation, as discussed in (Belilovsky et al., 2019a), the sequential nature of standard backpropagation also leads to inefficiency. As pointed out in (Jaderberg et al., 2017), backpropagation for deep neural networks suffers an update-locking problem, which means each layer heavily relies on upper layers’ feedback to update itself, and thus, it must wait for the information to propagate through the whole network before updating. This would be a great obstacle for GNN layers to be trained in parallel to alleviate computation pressure under time and memory constraint, and would prohibit the GNN training to be trained in an asynchronous setting.
In this work, using semi-supervised node classification as an example, we show that the greedy learning would help to decouple the optimization of each layer in GNNs and enable GNNs to achieve update-unlocking, i.e., allow the GNN layers to update without getting any feedback from the later layers. By using this decoupled greedy learning for GNNs, we can achieve parallelization of the network layers, which would make the model training much more efficient and would be very important for time or memory limited applications. Moreover, we propose to use a lazy-update scheme during training, which is to exchange information between layers after a certain number of epochs instead of every epoch, this will further improve the efficiency while not sacrificing much performance. We theoretically analyze the computation complexity of our proposed method, and analogue our method to the classic block coordinate descent optimization to enable further analysis. We run a set of experiments to justify our model, and show its great efficiency on all benchmark datasets. On the newly proposed large OGBN-arxiv dataset, when training a 7-layer model, our proposed method even saves 85% time and 66% per-GPU memory cost of the conventionally trained GCN.
Our main contributions can be summarized as follows. First, we introduce a decoupled greedy learning algorithm for GNNs that achieves update-unlocking and enables GNN layer to be trained in parallel. Next, we propose to leverage a lazy-update scheme to improve the training efficiency. We evaluate our proposed training strategy thoroughly on benchmark datasets, and demonstrate it has superior efficiency while not sacrificing much performance. Finally, our method is not limited to the GCN and the node classification task, but can be combined with other scalability-enhancing GNNs and can be applied to other graph-related tasks.
2 RELATED WORK
Before discussing our proposed approach, we review here related work on efficient training strategies for GNNs. The computational complexities of the discussed methods are summarized in Table 1, and we refer the reader to Appendix A for detailed computations.
2.1 DEEP GRAPH CONVOLUTIONAL NETWORK (DEEPGCN)
Graph convolutional network (GCN, Kipf & Welling, 2016) is one of the most popular models for graph-related tasks. Given an undirected graph G with node feature matrix X ∈ R^{N×D} and adjacency matrix A ∈ R^{N×N}, where N is the node number and D is the feature dimension, let Ã = A + I, let D̃ be a diagonal matrix satisfying D̃_{i,i} = ∑_{j=1}^{N} Ã_{i,j}, and let F = D̃^{-1/2} Ã D̃^{-1/2} be the normalized Ã. Then the l-th GCN layer produces the output H^{(l)} = σ(F H^{(l−1)} W^{(l)}), where σ is the non-linear transformation and W^{(l)} is the trainable weight matrix at layer l.
As pointed out in Li et al. (2018), when a GCN becomes deep, it suffers a severe over-smoothing problem, which means the nodes become indistinguishable after stacking too many network layers. However, for applications such as semantic segmentation (Li et al., 2019) or classification tasks on large datasets (Hu et al., 2020), we do need deeper GCN models. Therefore, we follow the work of Li et al. (2019), alleviating the over-smoothing problem by adding residual links between GCN layers, and obtain the deepGCN model. The l-th layer of our network model is H^{(l)} = σ(F H^{(l−1)} W^{(l)}) + H^{(l−1)}.
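A minimal PyTorch sketch of one such residual layer is given below; it is an illustrative reconstruction with ReLU assumed for σ, not the authors' code.

import torch
import torch.nn as nn

class ResGCNLayer(nn.Module):
    # One deepGCN layer: H_out = sigma(F H W) + H, with F the normalized adjacency.
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim, bias=False)   # weight matrix W

    def forward(self, F_norm, H):
        return torch.relu(self.linear(F_norm @ H)) + H  # aggregate, transform, residual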
2.2 EFFICIENT GNN TRAINING
To alleviate the expensive computation issue of GNNs introduced in the previous section, a large body of literature has proposed sampling-based batch-learning algorithms to train GNNs more efficiently.
GraphSAGE (Hamilton et al., 2017) introduced a node sampling strategy (NS), which randomly samples s neighbors for each node at each layer; then, for each node, instead of aggregating the embeddings of all its neighbors, we aggregate only the sampled ones. VRGCN (Chen et al., 2017) also followed this NS strategy, but further proposed to leverage historical activations to reduce the variance of the estimator. Though the NS scheme has smaller complexity than full-batch GNN training, it involves redundant computation, and its complexity grows exponentially with the number of layers. A minimal sketch of the NS idea is given below.
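The following hedged sketch (our illustration, not reference code from any of the cited papers) shows the core of neighbor sampling; `adj_list` is an assumed adjacency-list representation:

```python
import random

def sample_neighbors(adj_list, node, s):
    """Uniformly sample up to s neighbors of `node`, GraphSAGE-style."""
    nbrs = list(adj_list[node])
    return nbrs if len(nbrs) <= s else random.sample(nbrs, s)

# usage: adj_list = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
# sample_neighbors(adj_list, 0, 2) -> e.g. [3, 1]
```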
The layer-wise importance sampling strategy (IS) is a more advanced approach to efficient GNN training. FastGCN (Chen et al., 2018) proposed to sample nodes for each layer with a degree-based sampling probability in order to solve the scalability issue of NS. LADIES (Zou et al., 2019) leveraged the IS idea as well, but proposed a layer-dependent importance sampling scheme that enjoys smaller variance while maintaining the same level of complexity as FastGCN. Though IS is generally better than NS, we may still have to trade off complexity against performance (i.e., accuracy for classification tasks), since a large sample number helps performance but increases the computation cost, and vice versa.
Besides the aforementioned methods, there is also ClusterGCN (Chiang et al., 2019), which proposes to partition the graph into several clusters and then randomly select multiple clusters to form a batch to train the GNN. Though this allows us to train much deeper GCNs without much time and memory overhead, the stability of this approach's performance is hard to guarantee, since it depends heavily on the graph clustering settings.
2.3 LAYER-WISE GNN
Layerwise learning for neural networks was first introduced in the work of (Hinton et al., 2006) and further discussed in (Bengio et al., 2007). The work of (Belilovsky et al., 2019a;b) explored layerwise CNNs and achieved impressive results.
Recently, (You et al., 2020) proposed a layerwise algorithm for GNN training. The key idea is to train GNNs layer by layer sequentially. Figure 1 illustrates the sequential training framework for layerwise GNNs. For an L-layer GNN, we first train its first layer with an auxiliary classifier; after it has fully converged, we fix this layer and start to optimize the next layer, repeating this for all L layers. This layerwise training saves a lot of memory, since it only requires us to focus on one layer and to store only one layer's activation results. Besides the clear memory saving, this layerwise training scheme also saves a lot of time. During the learning process, it decouples the two key components of the per-layer feed-forward graph convolution: aggregation and transformation. Each layer then only needs to conduct the aggregation once at the beginning of training, and only performs the transformation step at each iteration, which greatly reduces the time cost. According to the reported results, this sequentially-trained layerwise method is more efficient than the joint learning strategy and achieves good performance. However, its sequential scheme brings some inefficiency, because each layer has to wait for all previous layers to fully converge before it can start training. In our work, we solve this problem and enable model parallelization to further improve the efficiency.
3 PROPOSED APPROACH
3.1 MODEL ARCHITECTURE
As mentioned in Section 2.1, we introduce our proposed algorithm with the deepGCN model (i.e., the GCN model with residual links), since the residual links help alleviate GCN's over-smoothing problem when the model goes deep, which is important for large-scale scenarios.
There exist two ways to train such a GCN model: conventional training and sequential layerwise training. We illustrate these two strategies with the high-level framework shown in Figure 1. For conventional training, we jointly optimize the learnable parameters in all layers and in the classifier. For layerwise training, we break the training of an L-layer GNN into L sequential stages; each stage has to wait for all previous layers to fully converge before training starts.
Note that sequential layerwise training has the advantage that it saves time and memory while not compromising too much performance, which suggests promising applications in large-scale models under hardware and time constraints. We now consider whether we can extend it to a parallel version so that the efficiency can be further improved. Interestingly, as shown in the following sections, we find the answer is affirmative.
3.2 DECOUPLED GREEDY LEARNING ALGORITHM
To enable parallel GNN training, the most challenging problem is update-locking: before updating one layer, we have to wait until the signal has been passed through all of its successors, which brings inefficiency. To alleviate this problem, we follow the design of layerwise GNNs: we decouple the GNN model into individual layers, associate each layer with an auxiliary classifier (an MLP layer with softmax activation), and assign a per-layer greedy objective. Then, given the output activation of a layer, we can leverage the auxiliary classifier to optimize the per-layer objective, and can therefore update the current layer without any feedback from its successors while the remaining layers are still in the forward pass.
Algorithm 1 Decoupled Greedy Learning (DGL) of GNNs
Require: Normalized adjacency matrix F; feature matrix X; labels Y; total number of iterations T; total number of layers L.
 1: Initialize: H^(0) = X
 2: for t = 1 to T do
 3:   for l = 1 to L do
 4:     H^(l) = σ(F H^(l−1) W^(l))   // Get node embeddings and store them as H^(l).
 5:     (W^(l), Θ^(l)) ← Update with ∇_{(W^(l),Θ^(l))} loss(Y, H^(l−1), F; W^(l), Θ^(l))   // Update parameters.
 6:   end for
 7: end for
Algorithm 2 Decoupled Greedy Learning (DGL) of GNNs with Lazy Update Scheme
Require: Normalized adjacency matrix F; feature matrix X; labels Y; total number of iterations T; total number of layers L; waiting time T_lazy.
 1: Initialize: Ĥ^(0) = F X
 2: for t = 1 to T do
 3:   for l = 1 to L do
 4:     H^(l) = σ(Ĥ^(l−1) W^(l))   // Get node embeddings.
 5:     (W^(l), Θ^(l)) ← Update with ∇_{(W^(l),Θ^(l))} loss(Y, Ĥ^(l−1); W^(l), Θ^(l))   // Update parameters.
 6:     if (t mod T_lazy == 0) then
 7:       Ĥ^(l) = F H^(l)   // Get propagated node embeddings and store them as Ĥ^(l).
 8:     end if
 9:   end for
10: end for
We name our training strategy Decoupled Greedy Learning of GNNs (DGL-GNN).
With our DGL-GNN, we achieve update-unlocking and can therefore enable parallel training of layerwise GNNs. For clarity, Figure 2 compares the signal propagation processes of the conventionally trained GNN, the sequentially trained layerwise GNN, and the parallel-trained GNN. From this illustration, we can observe that parallel training of layerwise GNNs avoids the situation in which one layer is forwarding or back-propagating the signal while the other layers are idle. Therefore, given the same number of batches of data, the parallel version finishes training much earlier than the conventional and sequential versions.
Following the notation of Section 2, we denote by F the normalized adjacency matrix, by H^(l) the output activation of the l-th layer, and by W^(l) the learnable parameters of the l-th layer. In addition, let Y be the labels, Θ^(l) the parameters of the l-th layer's classifier, and loss the cross-entropy loss commonly used for classification tasks. We then have the per-layer objective function loss(Y, H^(l−1), F; W^(l), Θ^(l)). We formally define our DGL-GNN training method in Algorithm 1. Note that the inner for-loop can be executed in parallel: while the l-th layer is working on the backward process in line 5, the (l+1)-th layer can start its forward propagation in line 4. Therefore, we claim that our DGL-GNN algorithm achieves update-unlocking.
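To make the training loop concrete, the following is a hedged, single-process sketch of Algorithm 1 in PyTorch (our illustration; helper names such as `layers`, `classifiers`, and `opts` are assumed, and in the truly parallel setting each layer would instead run on its own device, consuming its predecessor's stored activations):

```python
import torch
import torch.nn.functional as F_fn

def dgl_gnn_train(F_norm, X, Y, train_mask, layers, classifiers, opts, T):
    """Greedy per-layer training; layers[l] / classifiers[l] are nn.Modules
    playing the roles of W^(l) and Θ^(l), and opts[l] optimizes both jointly."""
    for t in range(T):
        H = X
        for l in range(len(layers)):
            H = torch.relu(layers[l](F_norm @ H))   # H^(l) = σ(F H^(l-1) W^(l))
            logits = classifiers[l](H)              # auxiliary MLP + softmax head
            loss = F_fn.cross_entropy(logits[train_mask], Y[train_mask])
            opts[l].zero_grad()
            loss.backward()                         # updates only (W^(l), Θ^(l))
            opts[l].step()
            H = H.detach()                          # block gradients across layers
    return layers, classifiers
```

The `detach()` call is what realizes the greedy decoupling: gradients of the l-th objective never reach earlier layers.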
We then empirically observed that, even without passing the signal to the next layer immediately after the forward process, we still obtain the same level of performance. Thus, the efficiency of DGL-GNN can be further improved by leveraging a Lazy Update scheme (LU-DGL-GNN). Instead of using the up-to-date activation output of its predecessor, a layer can use the historical activation to learn its parameters, updating the historical activation only a few times during the overall training process. Then, as in the sequentially trained layerwise GNN, we only need to conduct the aggregation step once after each update, which saves a lot of time. We denote by Ĥ^(l) the aggregated stored historical activation of layer l. We formally define the LU-DGL-GNN method in Algorithm 2; we mark its differences from DGL-GNN in blue.
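Analogously, here is a hedged sketch of Algorithm 2 (same assumed helpers as above; the initial warm-up pass that fills the stored activations is our own choice, since Algorithm 2 only specifies the initialization of Ĥ^(0), and the sketch uses plain GCN layers rather than the residual variant):

```python
import torch
import torch.nn.functional as F_fn

def lu_dgl_gnn_train(F_norm, X, Y, train_mask, layers, classifiers, opts, T, T_lazy):
    """Lazy-update variant: each layer trains on stored activations Ĥ^(l-1)
    and refreshes Ĥ^(l) = F H^(l) only every T_lazy epochs."""
    H_hat = [F_norm @ X]                            # Ĥ^(0) = F X
    for layer in layers:                            # warm-up pass to fill all Ĥ^(l)
        H_hat.append((F_norm @ torch.relu(layer(H_hat[-1]))).detach())
    for t in range(1, T + 1):
        for l in range(len(layers)):
            H = torch.relu(layers[l](H_hat[l]))     # transformation only, no aggregation
            loss = F_fn.cross_entropy(classifiers[l](H)[train_mask], Y[train_mask])
            opts[l].zero_grad()
            loss.backward()
            opts[l].step()
            if t % T_lazy == 0:                     # lazy refresh of Ĥ^(l)
                H_hat[l + 1] = (F_norm @ H).detach()
    return layers, classifiers
```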
To sum up, our proposed DGL-GNN and LU-DGL-GNN methods enjoy high efficiency: we introduce an auxiliary greedy objective for each layer and thus achieve update-unlocking; we decouple the model into layers and thereby enable the model to be trained in parallel; and we leverage the lazy-update scheme, with which we avoid redundant computation in the aggregation step and further reduce the training time.
4 COMPLEXITY ANALYSIS
As shown in Table 1, our proposed methods achieve lower complexity than conventional training and the other baselines. Note that DGL-GCN can be regarded as the special case with T_lazy = 1, i.e., the stored activations are updated every epoch. Therefore, we focus on the complexity justification for LU-DGL-GCN.
For time complexity, first note that the training process consists of two fundamental operations: aggregation and transformation. The time complexity of aggregation is O(‖A‖_0 K), and the time complexity of the transformation step is O(NK²). For LU-DGL-GCN, over the full learning process each layer performs the aggregation T/T_lazy times and the transformation 2T times, because this step is conducted for both the GNN layer and the auxiliary classifier. Since the computation of each layer is done in parallel, the overall time complexity of LU-DGL-GCN is O(T‖A‖_0 K / T_lazy + 2TNK²). In practice, if we put different layers on different GPUs and train in parallel, there is some extra, non-negligible time cost for GPU communication.
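As a hedged numerical illustration with hypothetical values (not numbers from our experiments): suppose T = 500 training epochs and T_lazy = 50. Then

T / T_lazy = 500 / 50 = 10,

so each layer performs only 10 aggregations instead of 500; the O(T‖A‖_0 K) aggregation term shrinks by a factor of T_lazy = 50, while the O(2TNK²) transformation term is unchanged.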
The memory complexity also consists of two components. For each layer, we store two things during training: the historical activation, and the learnable weight matrices of the GNN layer and of the auxiliary classifier. The activation takes O(NK) space, and the two weight matrices take O(2K²) space. Again, when training in a parallel fashion with the layers assigned to different machines, the per-GPU memory is only O(NK + 2K²), which is significantly reduced compared to most existing baselines.
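As a rough, hedged illustration (fp32, ignoring optimizer state and framework overhead, and assuming the publicly reported node count of OGBN-arxiv): with N ≈ 169,343 nodes and K = 128, a single stored activation takes about 169,343 × 128 × 4 bytes ≈ 87 MB per layer, versus roughly L times that for a conventionally trained L-layer model that keeps all intermediate activations.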
5 ANALOGY TO BLOCK COORDINATE DESCENT
To justify the rationale of the proposed model, we present here an analogy between our decoupling approach and the classic Block Coordinate Descent (BCD) optimization strategy and its variants (Wright, 2015; Wu et al., 2008; Shi et al., 2016), which for completeness are discussed in Appendix A.2. We observe that the sequential layerwise GNN shares a similar high-level idea with the BCD method. We regard each layer and its associated auxiliary classifier as a module. Then, for each module, all of its parameters can be treated as one coordinate block, and we order the coordinate blocks according to the layers they correspond to. Recall that in BCD, at each iteration we choose one coordinate block and optimize the overall training objective with respect to the chosen block. So if we keep choosing the first coordinate block until it has fully converged, then keep choosing the second coordinate block, and so on until the last coordinate block has fully converged, this optimization process is the same as the learning process of a sequentially trained layerwise GNN.
We also observe that DGL-GCN is analogous to synchronous parallel BCD, and LU-DGL-GCN can be regarded as an analogue of asynchronous parallel BCD. Note that our DGL-GCN and LU-DGL-GCN can be implemented in a parallel fashion, and their key difference is whether all the layers share consistent, up-to-date information. Therefore, if we make the same correspondence between learnable parameters and coordinate blocks as in the sequential version above, the similarity between parallel BCD and our decoupled greedy learning methods is easy to see. Such an analogy allows us to leverage existing theorems for BCD optimization to better understand and analyze DGL-GCN and LU-DGL-GCN.
6 EMPIRICAL RESULTS
We evaluate our proposed algorithms on the multi-class node classification task. However, it should be noted that the decoupled greedy learning method can also be applied to other graph-related tasks and is not limited to node classification.
We use the following four public datasets for evaluation: Cora, Citeseer, Pubmed (Sen et al., 2008), and OGBN-arxiv (Hu et al., 2020). We briefly introduce these datasets and summarize their statistics in Appendix A.3. We compare our method against several baseline models introduced in Section 2: Full-Batch GCN (Kipf & Welling, 2016), FastGCN (Chen et al., 2018), LADIES (Zou et al., 2019), and LGCN (You et al., 2020). For all methods, we use the same deepGCN model architecture and set all hidden dimensions to 128. We follow the public implementations of all baselines and use their parameter settings. We conduct the training 10 times and report the mean and variance of the evaluation results. Further implementation details are provided in Appendix A.3.
We evaluate the performance of the different methods with the following metrics. Accuracy (%): the micro F1-score on the test data at the convergence point. Memory (MiB): the maximum per-GPU memory cost during training. Total Running Time (s): the total training time (excluding validation) before convergence.
We summarize the classification performance results in Table 6, which demonstrate the efficacy of our approach. For small-scale datasets such as Cora, Citeseer, and Pubmed, our LU-DGL-GCN is the fastest among all baselines, and its accuracy is lower only than that of Full-Batch GCN while being higher than all the other efficient GCN training strategies. Since we only use a 2-layer GCN architecture for these small-scale datasets, we cannot observe much memory advantage. For the large-scale OGBN-arxiv dataset, we use a deeper 7-layer GCN architecture; here we find that, compared to Full-Batch GCN, our LU-DGL-GCN is much faster and saves per-GPU memory while sacrificing only minor performance. Compared to the sampling-based methods (LADIES and FastGCN), our LU-DGL-GCN has better accuracy and is faster, while needing only slightly more per-GPU memory. Note that here we set the sample number to 64 for FastGCN and LADIES; increasing the sample number yields higher accuracy but larger time and memory costs. Compared to LGCN, which is a sequentially-trained layerwise GCN, our LU-DGL-GCN has a clear advantage in time cost and even obtains a small accuracy improvement. Our experimental results, together with the analysis in previous sections, indicate that LU-DGL-GNN would be very helpful for time- and memory-limited applications, especially in scenarios that require deep models. Furthermore, to establish the importance of the different components of our method, we isolate certain aspects of it via the following ablation studies.
Importance of the Lazy Update Scheme. We illustrate the advantage of the lazy update scheme and show how the waiting time T_lazy influences performance. We use the Cora and OGBN-arxiv datasets as examples of small-scale and large-scale scenarios, and follow the previous model architectures and parameters. We run the model for 200 epochs and compare the obtained accuracy, total running time, and memory. The results are summarized in Table 6. We find that the lazy update scheme greatly reduces the time cost, and that a properly chosen waiting time T_lazy improves accuracy slightly, since it can alleviate overfitting. In addition, when comparing the accuracy curves for different T_lazy shown in Figure 3, we find that a large T_lazy makes training more stable.
Sequential Training vs. Parallel Training. Finally, we briefly compare sequential training with our parallel training of the greedy objective. Again, we use Cora and OGBN-arxiv as examples and follow the above experimental settings. As shown in Figure 3, we find that, in terms of accuracy, parallel training quickly catches up with sequential training and is less likely to overfit.
7 CONCLUSIONS
In this paper, we focus on the efficiency issue of GNN training in large-scale applications and present a decoupled greedy GNN learning strategy. Our proposed DGL-GNN model achieves update-unlocking by introducing greedy auxiliary objectives during training, and enables parallelization by decoupling the GNN into smaller modules. We also propose the LU-DGL-GNN method, which leverages a lazy-update scheme during training to further improve model efficiency. We empirically analyze our proposed DGL-GNN model and demonstrate its effectiveness and superior efficiency through a range of experiments. We note that while we introduce our proposed method with the deepGCN model and use the semi-supervised node classification task as an example, the DGL-GNN and LU-DGL-GNN methods are not limited to this setting, and can be applied to other GNNs and graph-related downstream tasks. Further, while here we focus on comparing the decoupled approach as an alternative to other sampling-based methods with respect to accuracy and efficiency, these approaches can be regarded as complementary to each other. By combining the decoupled greedy learning method with other scalability-enhancing improvements of GNN training, the computation cost could be further reduced, which poses a promising direction for future work.
A APPENDIX
A.1 COMPLEXITY ANALYSIS FOR BASELINES
In this section, we explain how we compute the memory and time complexity in Table 1.
The memory cost of all the aforementioned methods consists of two parts: storage of intermediate embedding matrices and storage of weight matrices. The weight matrices always need O(LK²), since we have to store L weight matrices of dimension K × K. The intermediate embeddings have different memory costs for different methods. The time complexity likewise consists of two parts: the aggregation time cost and the transformation time cost. These two parts vary across methods.
Full-batch GCN stores the intermediate embedding matrices of all L layers, and each matrix holds N node embeddings of dimension K, so its memory complexity for intermediate embedding storage is O(LNK). Therefore, its total memory complexity is O(LNK + LK²). In terms of time complexity, the propagation step, a sparse-matrix multiplication, has time complexity O(‖A‖_0 K), and the transformation step, a dense-matrix multiplication, has time complexity O(NK²). Over L layers and T iterations of training, the total time complexity is O(TL‖A‖_0 K + TLNK²).
For GraphSAGE, we have to store O(b s_node^{L−1}) node embeddings of dimension K, so the total memory complexity is O(bK s_node^{L−1} + LK²). In terms of time complexity, for each batch, updating one node requires updating O(s_node^{L−1}) activations, each of which needs O(s_node K) for aggregation and O(K²) for transformation. Therefore, the total time complexity of GraphSAGE is O(bTK s_node^{L} + bTK² s_node^{L−1}).
For VR-GCN, it stores all historical activations, which takes O(LNK) memory and leads to a total memory complexity of O(LNK + LK²). The time complexity can be analyzed similarly to GraphSAGE, giving O(bT D̄ K s_node^{L−1} + bT K² s_node^{L−1}).
For FastGCN, it only has to store b node embeddings in the last layer and (L−1) s_layer node embeddings in the previous L−1 layers, so the total memory complexity is O(bK + (L−1) K s_layer + LK²). In terms of time complexity, we need O(bTK s_layer + (L−1) TK s_layer²) for aggregation and O(bTK² + (L−1) T s_layer K²) for transformation. Noting that b < s_layer, we ignore the relatively small terms and obtain a total memory complexity of O(LK s_layer + LK²) and a total time complexity of O(TLK s_layer² + TLK² s_layer).
LADIES is also a layer-wise sampling method. Although its sampling strategy differs from that of FastGCN, their memory and time costs are the same. Therefore, LADIES also has memory complexity O(LK s_layer + LK²) and time complexity O(TLK s_layer² + TLK² s_layer).
A.2 SUPPLEMENTARY DISCUSSION OF COORDINATE AND BLOCK COORDINATE DESCENT
Coordinate descent (CD) is a classic iterative optimization algorithm that solves an optimization problem by approximately minimizing the objective along each coordinate direction successively. In each iteration, it chooses one variable, fixes the other components, and then optimizes the objective with respect to only that single variable. By doing so, we only need to solve a lower-dimensional minimization problem at each iteration, which is easier. The CD algorithm has been discussed extensively in the literature and has long been used in applications (Wright, 2015; Wu et al., 2008; Shi et al., 2016).
Block coordinate descent (BCD) is an extension of the CD method. The difference between BCD and conventional CD is that BCD searches along a coordinate hyperplane instead of a single coordinate direction (Beck & Tetruashvili, 2013); i.e., it groups variables into blocks and approximately minimizes the objective with respect to only one block of variables at each iteration while fixing the others.
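As a hedged toy example (our illustration only), the following sketch applies cyclic BCD to a least-squares objective, with the coordinates split into blocks:

```python
import numpy as np

def block_coordinate_descent(A, b, blocks, n_iters=200):
    """Minimize f(x) = 0.5 * ||A x - b||^2 by cycling over coordinate blocks."""
    x = np.zeros(A.shape[1])
    lr = 1.0 / np.linalg.norm(A, 2) ** 2            # step size 1/L for stability
    for _ in range(n_iters):
        for idx in blocks:                          # pick one coordinate block
            grad = A.T @ (A @ x - b)                # gradient of f at current x
            x[idx] -= lr * grad[idx]                # update this block only
    return x

# usage: two blocks over a 4-dimensional problem
rng = np.random.default_rng(0)
A, b = rng.normal(size=(8, 4)), rng.normal(size=8)
x = block_coordinate_descent(A, b, blocks=[np.arange(0, 2), np.arange(2, 4)])
```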
The BCD algorithm has parallel implementations. As introduced in the work of (Wright, 2015), we can categorize them into two types: synchronous and asynchronous. In synchronous parallel BCD, we partition the computation into pieces and place different pieces on different processors; each processor updates part of the variables in parallel, and a synchronization step is then conducted to guarantee the consistency of the information shared among all processors before further computation. In the asynchronous setting, the difference is that we do not have to perform the synchronization. As discussed in Section 5, our proposed method can be regarded as analogous to BCD and its aforementioned variants.
A.3 SUPPLEMENTARY FOR EXPERIMENTS
Dataset Statistics We use four benchmark datasets for the node classification task: Cora, Citeseer, Pubmed, and OGBN-arxiv. The detailed statistics of these datasets are shown in Table 4.
Hardware We run the experiments on a Tesla V100 GPU (16 GB).
Baselines A detailed introduction of our baselines can be found in Section 2. As introduced before, since NS methods are in general less competitive than IS methods, we only compare against the IS strategies.
Model Architecture For Cora, Citeseer, and Pubmed, we set the number of layers to 2 for all methods. For OGBN-arxiv, we set it to 7.
Parameter Settings For all methods and datasets, we conduct training 10 times and record the mean and variance. For the small datasets (Cora, Citeseer, Pubmed) we set the number of epochs to 200; for OGBN-arxiv we set it to 500. We choose the model with the best validation performance as the convergence point. For the sampling-based methods FastGCN and LADIES, we set the sample number to 64; increasing this number would improve accuracy slightly, but would increase the memory and time costs.
1. What is the focus of the paper, and what are its contributions to graph neural networks?
2. What are the strengths of the proposed approach, particularly regarding its parallelization and memory savings?
3. What are the weaknesses of the paper, especially regarding its novelty and impact?
4. Do you have any concerns about the auxiliary function used in the proposed approach?
5. How could the experimental results be strengthened, and what additional studies could be conducted?
6. Are there any typos or other issues that need to be addressed in the paper?
7. What could the authors do to improve the paper further?
Review
Summary: This paper introduces a method for decoupled layerwise training of graph neural networks. This method has the potential to be parallelized and offers computational and memory savings. The authors provide experimental results on 4 datasets and compare against 4 baselines. Results suggest that the approach achieves comparable results to other methods while being faster through parallelization.
Strengths
a) Well-written paper with clear notation and illustrations.
b) Good analysis of the different algorithms. Lays out complexity and memory costs of related work.
c) Analogy to block coordinate descent intuitively makes sense.
d) Nice contribution towards achieving asynchronous/lazy updates for GNNs.
Weaknesses
Novelty and impact need to be strengthened
a) As mentioned by the authors, greedy layerwise pre-training is an old idea. When it first came out for deep NNs, everyone was excited about it, but nowadays hardly anyone does it. So if this method is to make a comeback for GNNs, then the benefits have to be very compelling.
b) With respect to benefits, the presentation around why this idea is compelling needs to be concretely laid out. It seems like there are some computational and memory benefits via parallelization. But is the additional complexity of parallel training worth it? As the authors mention, there can be non-trivial communication costs between GPUs, in addition to the added code complexity.
c) Absent theoretical convergence guarantees, the proposed approach is a heuristic as it relies on an auxiliary function that’s somewhat arbitrary. Further discussion is needed on why the chosen auxiliary function is a good idea. Some ablation studies with different auxiliary functions are needed to shed light on this particular choice of auxiliary function.
Experiments can be strengthened
a) The results seem to trade off accuracy against speed and memory savings, and results are reported on 3 small and one large dataset. To truly claim that this method generalizes, the authors would need to strengthen their results across different tasks (not just semi-supervised classification) and models (not just GCN).
b) More details are needed regarding the experimental setup. Was there a multi-gpu setup? E.g. OGBN-arxiv has 7 layers. The authors report a total running time of 23.6s. Was this on a 7 GPU setup?
c) More ablation and convergence studies. The authors stop at 200 epochs for the smaller datasets. Does the proposed method reach full-batch GCN accuracy at higher epochs?
Typos and other fixes
a) Typo in the abstract: should be "Graph Neural Networks (GNNs) have become".
b) Table 6 is missing.
c) Results: “... but higher than all the other efficient GCN training strategies…” ; LGCN is better for pubmed and citeseer
d) "....when comparing the accuracy curves of different Tlazy shown in Figure 3, we find that large Tlazy makes the training more stable…"
i) Hard to see this from Figure 3. Consider plotting running variance as a shaded overlay.
ii) Plots are too busy. Hard to draw conclusions.
What can the authors do to make the paper better
a) More thorough experimentation, as described in the "Experiments can be strengthened" section.
b) I did not find the code release with the paper. For a paper whose primary claim is computational and memory savings, I think it would be a good idea to include it.
c) Emphasize the novelty of the method; otherwise, greedy pre-training seems like a lukewarm idea.
In addition to the inefficiency brought by the graph convolution operation, as discussed in (Belilovsky et al., 2019a), the sequential nature of standard backpropagation also leads to inefficiency. As pointed out in (Jaderberg et al., 2017), backpropagation for deep neural networks suffers an update-locking problem, which means each layer heavily relies on upper layers’ feedback to update itself, and thus, it must wait for the information to propagate through the whole network before updating. This would be a great obstacle for GNN layers to be trained in parallel to alleviate computation pressure under time and memory constraint, and would prohibit the GNN training to be trained in an asynchronous setting.
In this work, using semi-supervised node classification as an example, we show that the greedy learning would help to decouple the optimization of each layer in GNNs and enable GNNs to achieve update-unlocking, i.e., allow the GNN layers to update without getting any feedback from the later layers. By using this decoupled greedy learning for GNNs, we can achieve parallelization of the network layers, which would make the model training much more efficient and would be very important for time or memory limited applications. Moreover, we propose to use a lazy-update scheme during training, which is to exchange information between layers after a certain number of epochs instead of every epoch, this will further improve the efficiency while not sacrificing much performance. We theoretically analyze the computation complexity of our proposed method, and analogue our method to the classic block coordinate descent optimization to enable further analysis. We run a set of experiments to justify our model, and show its great efficiency on all benchmark datasets. On the newly proposed large OGBN-arxiv dataset, when training a 7-layer model, our proposed method even saves 85% time and 66% per-GPU memory cost of the conventionally trained GCN.
Our main contributions can be summarized as follows. First, we introduce a decoupled greedy learning algorithm for GNNs that achieves update-unlocking and enables GNN layer to be trained in parallel. Next, we propose to leverage a lazy-update scheme to improve the training efficiency. We evaluate our proposed training strategy thoroughly on benchmark datasets, and demonstrate it has superior efficiency while not sacrificing much performance. Finally, our method is not limited to the GCN and the node classification task, but can be combined with other scalability-enhancing GNNs and can be applied to other graph-related tasks.
2 RELATED WORK
Before discussing our proposed approach, we review here related work on efficient training strategies for GNNs. The computational complexities the discussed methods are summarized in Table 1, and we refer the reader to Appendix A for detailed computation.
2.1 DEEP GRAPH CONVOLUTIONAL NETWORK (DEEPGCN)
Graph convolutional network (GCN, Kipf & Welling, 2016) is one of the most popular models for graph-related tasks. Given an undirected graph G with node feature matrix X ∈ RN×D and adjacency matrix A ∈ RN×N where N is node number and D is feature dimension, let à = A+ I , D̃ be a diagonal matrix satisfying D̃i,i = ∑N j=1 Ãi,j , and F = D̃
−1/2ÃD̃−1/2 be the normalized Ã, then, the l-th GCN layer will have the output H(l) as H(l) = σ(FH(l−1)W (l)), where σ is the non-linear transformation, and W (l) is the trainable weight matrix at layer l.
As pointed out in Li et al. (2018), when GCN becomes deep, it will suffer severe over-smoothing problem, which mean the nodes will become not distinguishable after stacking too many network layers. However, for applications such as semantic segmentation (Li et al., 2019) or classification
tasks on large datasets (Hu et al., 2020), we do need deeper GCN models. Therefore, we follow the work of Li et al. (2019), alleviating over-smoothing problem by adding residual links between GCN layers and obtain the deepGCN model. The l-th layer of our network model will be H(l) = σ(FH(l−1)W (l)) + H(l−1).
2.2 EFFICIENT GNN TRAINING
To alleviate the expensive computation issue of GNN introduced in previous section, a lot of literature has proposed sampling-based batch-learning algorithms to train GNNs more efficiently.
GraphSAGE (Hamilton et al., 2017) introduced a node sampling strategy (NS), which is to randomly sample s neighbors for each node at each layer, then, for each node, instead of aggregating embeddings of all its neighbors, we only aggregate the sampled ones. VRGCN (Chen et al., 2017) also followed this NS strategy, but it further proposed to leverage history activation to reduce the variance of the estimator. Though NS scheme has smaller complexity compared to full-batch GNN, there exists redundant computation and the complexity grows exponentially with the layer number.
Layer-wise importance sampling strategy (IS) would be a more advanced method for efficient GNN training. FastGCN (Chen et al., 2018) proposed to sample nodes for each layer with a degree-based sampling probability in order to solve the scalability issue in NS. The work of LADIES (Zou et al., 2019) leveraged IS idea as well, but it proposed a layer-dependent importance sampling scheme, which enjoys a smaller variance while maintaining same level complexity as FastGCN. Though this IS is better than NS in general, but we may still have to trade-off the complexity with the performance (i.e. accuracy for classification tasks), since using a large sample number would be helpful for the performance and increase the computation cost and vice versa.
Except for ther aforementioned methods, we also has ClusterGCN (Chiang et al., 2019), which proposes to partition the graph into several clusters then randomly select multiple clusters to form a batch to train the GNN. Though this would allows us to train much deeper GCN without much time and memory overhead, the stability of the performance of this approach would be hard to guarantee, since the performance would heavily depends on the graph clustering settings.
2.3 LAYER-WISE GNN
Layerwise learning for neural networks is first introduced by the work of (Hinton et al., 2006), and was further discussed in (Bengio et al., 2007). The work of (Belilovsky et al., 2019a;b) explored the layerwise CNNs and achieved impressive results.
Recently, (You et al., 2020) proposed a layerwise algorithm for GNN training. The key idea is to train GNNs layer by layer sequentially. Figure 1 illustrates the sequential training framework for layerwise GNN. For a L-layer GNN, we first train its first layer with an auxiliary classifier, then after it get fully converged, we fix this layer, and start to optimize next layer, we do the same things for all L layers. This layerwise training saves us a lot of memory, since this method only requires us
to focus on one layer and only need to store one layer’s activation results. Besides the clear memory saving, this layerwise training scheme also saves us a lot of time. During the learning process, it can decouple the two key components in the per-layer feed-forward graph convolution: aggregation and transformation. Then for each layer, it only needs to conduct the aggregation once at the beginning of the training, and then only need to do the transformation step at each iteration, which greatly reduce the time cost. According to the reported results, this sequentially-trained layerwise method is efficient than the joint learning strategy and can get us good performance. However, its sequential scheme would bring some inefficiency because one layer has to wait until its previous layers to get fully converged to start training. In our work, we solve this problem and enable the model parallelization to further improve the effciency.
3 PROPOSED APPROACH
3.1 MODEL ARCHITECTURE
As mentioned in section 2.1, we introduce our proposed algorithm with deepGCN model (i.e., the GCN model with residual link) since the residual link would help to alleviate GCN’s oversmoothing problem when it goes deep, which would be important for large scale scenarios.
There exist two ways to train such GCN model: conventional training and sequential layerwise training. We illustrate these two strategies with the high-level framework shown in Figure 1. For conventional training, we jointly optimize the learnable parameters in all layers and in the classifier. For layerwise training, we break the training for a L-layer GNN into L sequential stages, each stage has to wait all its previous layers to get fully converged to start training.
Note that, the sequential layerwise training has the advantage that it can save time and memory while not compromising too much performance,
this suggests its promising applications in large scale models under hardware and time constraints. We now consider, whether we can extend it to a parallel version, so that the efficiency can be further improved? Interestingly, as shown in the following sections, we find the answer is affirmative.
3.2 DECOUPLED GREEDY LEARNING ALGORITHM
To enable parallel GNN training, the most challenging problem is update-locking. Before updating one layer, we have to wait after the signal has been passed through all its successors, which would bring inefficiency. To alleviate this problem, we follow the design of layerwise GNN: decoupling the GNN model into different layers, associating each layer with an auxiliary classifier, which is a MLP layer with softmax activation, and assigning a per-layer greedy objective. Then, with the output activation of a given layer, we can leverage the auxiliary classifier to optimize the per-layer objective and therefore can update the current layer without any feedback from its successors while the rest lay-
Algorithm 1 Decoupled Greedy Learning (DGL) of GNNs Require: Normalized Adjacency Matrix F ; Feature Matrix X; Labels Y ; Total Number of Itera-
tions T ; Total Number of Layers L. 1: Initialize: H(0) = X; 2: for t = 1 to T do 3: for l = 1 to L do 4: H(l) = σ(FH(l−1)W (l)) // Get node embeddings and store them as H(l). 5: (W (l),Θ(l))← Update with∇loss(W (l),Θ(l))(Y ,H(l−1),F ;W (l),Θ(l)) // Update parameters. 6: end for 7: end for
Algorithm 2 Decoupled Greedy Learning (DGL) of GNNs with Lazy Update Scheme Require: Normalized Adjacency Matrix F ; Feature Matrix X; Labels Y ; Total Number of Itera-
tions T ; Total Number of Layers L; Waiting time Tlazy. 1: Initialize: Ĥ(0) = FX; 2: for t = 1 to T do 3: for l = 1 to L do 4: H(l) =σ(Ĥ(l−1)W (l)) // Get node embeddings. 5: (W (l),Θ(l))←Update with∇loss(W (l),Θ(l))(Y , Ĥ(l−1);W (l),Θ(l)) //Update parameters. 6: if (t mod Tlazy == 0) then 7: Ĥ(l) = FH(l) // Get propagated node embeddings and store them as Ĥ(l). 8: end if 9: end for
10: end for
ers are still in the forward process. We name our training strategy as Decoupled Greedy Learning of GNNs (DGL-GNN).
With our DGL-GNN, we achieve update-unlocking, and therefore can enable parallel training for layerwise GNNs. For clarity, we provide Figure 2 to compare the signal propagation process of the conventionally trained GNN, sequentially trained layerwise GNN, and the parallel trained GNN. With this illustration, we can observe that the parallel training of layerwise GNN can avoid the case in which one layer is forwarding or back-propagating the signal while other layers are idle. Therefore, that given same number of batches of data, the parallel version would finish training much earlier than the conventional and the sequential version.
Following our notations in section 2, we denote by F the normalized adjacency matrix, H(l) the output activation of l-th layer, and W (l) the learnable parameters for l-th layer. Plus, let Y be the labels, Θ(l) be the parameters for l-th layer’s classifier, and loss be the cross-entropy loss which is frequently used for classification tasks. Then, we have the per-layer objective function: loss(W (l),Θ(l))(Y ,H(l−1),F ;W (l),Θ(l)). We now formally define our DGL-GNN training method in algorithm 1. Note that, the inner for-loop can be done in a parallel manner, i.e., when the l−th layer is working on the backward process as given in line 5, the (l + 1)−th layer can start forward propagation as given in line 4. Therefore, we claim our DGL-GNN algorithm can achieve update-unlocking.
We then empirically observed that, without passing the signal to next layer immediately after the forward process, we still get same-level performance. Thus, we find that the efficiency of DGLGNN can be further improved by leveraging an Lazy Update scheme (LU-DGL-GNN). Instead of using the up-to-date activation output from its predecessor, one layer can use the history activation to learn its parameters and only update the history activation a few times during the overall training process. Then, same as sequential trained layerwise GNN, we only need to conduct the aggregation step for one time after each update, this saves us a lot time. We denote by Ĥ(l) the aggregated stored history activation for layer l. We now formally define the LU-DGL-GNN method in algorithm 2, we marked its difference with DGL-GNN in blue.
To sum-up, our proposed DGL-GNN and LU-DGL-GNN methods enjoys a very high efficiency because we introduce an auxiliary greedy objective for each layer and thus achieve update-unlocking, we then decouple the model into layers and therefore enable the model to be trained in parallel, and we finally propose to leverage the lazy-update scheme, with which we can avoid redundant computation in the aggregation step and further reduce the training time.
4 COMPLEXITY ANALYSIS
As shown in Table 1, our proposed methods achieve a lower complexity compared to the conventional training and other baselines. Note that DGL-GCN can be regard as a special case in which Twait = 1, i.e. we update the stored activation every epoch. Therefore, we focus on the complexity justification for LU-DGL-GCN.
For time complexity, first, we know that the training process consists two fundamental operations: aggregation and transformation. The time complexity of aggregation is O(‖A‖0K), and the time complexity of the transformation step is O(NK2). Then, we note that, for LU-DGL-GCN, during the full learning process, for each layer, we have to do aggregation T/Twait times, and we have to do the transformation 2T times because this step should be conducted for both the GNN layer and the auxiliary classifier. Since the computation for each layer is done in parallel, we know that the overall time complexity for LU-DGL-GCN should be O(T‖A‖0K/Twait + 2TNK2). In practice, if we put different layers on different GPUs and do the training in parallel, there would be some extra non-negligible time cost for GPU communication.
For memory complexity, it also consists two components. We have to store two things for each layer during the training: the history activation and the intermediate learnable weight matrices for GNN and for the auxiliary classifier. The activation takes O(NK), and the two types of weight matrix take O(2K2) space. Again, when we do the training in a parallel fashion and assign the layers to different machines, the per-GPU memory would only be O(NK + 2K2), which is significantly reduced compare to most of the existing baselines.
5 ANALOGY TO BLOCK COORDINATE DESCENT
To justify the rationality of the proposed model, we present here an analogy of our decoupling approach to the classic Block Coordinate Descent (BCD) optimization strategy and its variants (Wright, 2015; Wu et al., 2008; Shi et al., 2016), which for completeness are discussed in Appendix A.2. We observe that, the sequential layerwise GNN share similar high-level idea with the BCD method. We regard each layer and its associated auxiliary classifier as a module. Then, for each module, all its parameters can be treated as a coordinate block, we order the coordinate block according to which layer it corresponds to. Note that, in BCD, for each iteration, we choose one coordinate block and optimize the overall training objective with respect to the chosen block. So if we keep choosing the first coordinate block until it fully converged, then keep choosing the second coordinate block, etc., until the last coordinate block fully converge, then this optimization process is the same as the learning process of a sequentially trained layerwise GNN.
We also oberve that, the DGL-GCN can be analog to the synchronous parallel BCD and the LUDGL-GCN can be regard as an analogy of asynchronous parallel BCD. Note that our DGL-GCN and LU-DGL-GCN can be implemented in a parallel fashion, and their key difference is whether all the layers share a consistency and up-to-date information. Therefore, if we make the same analogy of learnable parameters and the coordinate blocks as in the above sequential version, then it would be easy to find the similarity between parallel BCD and our decoupled greedy learning methods. With such analogy, it would allow us to leverage existing theorems for BCD optimization to better understand and analyze the DGL-GCN and LU-DGL-GCN.
6 EMPIRICAL RESULTS
We evaluate our proposed algorithms with the multi-class node classification task. However, it should be noted that the decoupled greedy learning method can also be applied in other graph-related tasks and is not limited to node classification.
We use the following four public datasets for evaluation: cora, citeseer, pubmed (Sen et al., 2008), and OGBN-arxiv (Hu et al., 2020). We briefly introduce these datasets and summarize their statistic in Appendix A.3. We compare our method against several baseline models introduced in Section 2: Full-Batch GCN (Kipf & Welling, 2016), FastGCN (Chen et al., 2018), LADIES (Zou et al., 2019), and LGCN (You et al., 2020). For all the methods, we use the same deepGCN model architecture and set all the hidden-dimension as 128. We follow the public implementations of all the baselines, and use their parameter settings. We conduct the training for 10 times and take the mean and variance of the evaluation results. Further implementation details are provided in Appendix A.3.
We evaluate the performance of different methods with the following evaluation metrics: Accuracy (%): The micro F1-score of the test data at the convergence point. Memory (MiB): The maximum per-GPU memory cost during training. Total Running Time (s): The total training time (exclude validation) before convergence.
We summarize the classification performance results in Table 6, which demonstrates the efficacy of our approach. We can see that, for small-scale datasets such as Cora, Citeseer, Pubmed, our LU-DGL-GCN is the fastest among all the baselines, and its accuracy is only lower than FullBatch GCN but higher than all the other efficient GCN training strategies. Since we only use a 2-layer GCN architecture for these small-scale datasets, we can not see much memory advantage. For the large-scale OGBN-arxiv dataset, we use a deeper 7-layer GCN architecture, then we can find that, compared to Full-Batch GCN, our LU-DGL-GCN is much faster and can save per-GPU memory while only sacrifice minor performance. Compared to sampling-base methods (LADIES and FastGCN), our LU-DGL-GCN has a better accuracy and is faster, while only need slightly more per-GPU memory. Note that for here, we set the sample number as 64 for FastGCN and LADIES, if we increase the sample number, we will obtain a higher accuracy but the time cost and memory cost would be larger. Compared to LGCN which is a sequantially-trained layerwise GCN, our LU-DGLGCN has a clear advantage on the time cost, and can even obtain a little accuracy improvement. Our experimental results together with our analysis in previous sections indicate that our LU-DGLGNN would be very helpful for time and memory limited applications, especially for the scenarios in which we need deep models. Furthermore, to establish the importance of different components of our method, we isolate certain aspects of it via the following ablation study.
Importance of Lazy Update Scheme. We illustrate the advantage of the Lazy Update Scheme and show how the waiting time Tlazy will influence the performance. We use the cora and OGBN-arxiv datasets as examples for small-size and large-size scenarios, and follow previous model architectures and parameter. We run the model for 200 epochs and compare the obtained accuracy, total running time, and memory. We summarize the results in Table 6. We find that, with the lazy update scheme, we can greatly reduce the time cost, and a proper waiting time Tlazy would help to improve the accuracy a little bit since it can alleviate overfitting. In addition, when comparing the accuracy curves of different Tlayer shown in Figure 3, we find that large Tlazy makes the training more stable.
Sequential Training v.s. Parallel Training. Finally, we briefly compare the sequential training and our parallel training of the greedy objective. Again, we use cora and OGBN-arxiv as examples and follow the above experiment settings. As shown in Figure 3, we find that in terms of accuracy, parallel training can quickly catch up with the sequential training, and is less likely to overfit.
7 CONCLUSIONS
In this paper, we focus on the efficiency issue of GNN training in large-scale applications and present a decoupled greedy GNN learning strategy. Our proposed DGL-GNN model achieves updateunlocking by introducing greedy auxiliary objectives during training, and enables parallelization by decoupling the GNN into smaller modules. We also propose the LU-DGL-GNN method, which leverages a lazy-update scheme during training to further improve the model efficiency. We empirically analyze our proposed DGL-GNN model and demonstrate its effectiveness and superior efficiency through a range of experiments. We note that while we introduce our proposed method with deepGCN model and use the semi-supervised node classification task as an example, the DGL-GNN and LU-DGL-GNN methods are not limited to this setting, and can be applied to other GNNs and graph-related downstream tasks. Further, while here we focus on comparing the decoupled approach as an alternative to other sampling-based methods respect to their accuracy and efficiency, these approaches can be regarded as complementary to each other. By combining the decoupled greedy learning method with other scalability-enhancing improvements of GNN training, the computation cost would be further reduced, which poses a promising direction for future work.
A APPENDIX
A.1 COMPLEXITY ANALYSIS FOR BASELINES
In this section, we explain how we compute the memory and time complexity in Table 1.
All the aforementioned methods’s memory cost consists two parts: intermediate embedding matrices storage and weight matrices storage. The weight matrices always needO(LK2) since it has to store L weight matrices with dimension K ×K. The intermediate embedding has different memory cost for different methods. In terms of time complexity, it also consists two parts: the aggregation time cost and transformation time cost. These two parts varies for different methods.
Full-batch GCN stores all the intermediate embedding matrices for all the L layers, and each matrice has N nodes with dimension K, so its memory complexity for intermediate embedding storage would be O(LNK). Therefore, its total memory complexity is O(LNK + LK2). In terms of time complexity, the propagation step which is a sparse-matrix multiplication has time complexity O(‖A‖0K), and the transformation step which is a dense-matrix multiplication has time complexity O(NK2). During the training for L layers and T iterations, the total time complexity would be O(TL‖A‖0K + TLNK2).
For GraphSAGE, it has to storeO(bsL−1node)K-dimension node embeddings, so it has a total memory complexity O(bKsL−1node + LK2). In terms of time complexity, for each batch, to update one node, we have to updateO(sL−1node) activations, each needsO(K∫\ode) for aggregation and K2 for transformation. Therefore, the total time complexity for GraphSAGE is O(bTKsLnode + bTK2s L−1 node).
For VR-GCN, it stores all historical activations, which takes a memory of O(LNK) and will lead to the total memory complexity O(LNK + LK2). The time complexity can be analyzed similarly as GraphSAGE, which would be O(bT D̄KsL−1node + bK2s L−1 node).
For FastGCN, it only has to store b node embeddings in the last layer and (L − 1) s_layer node embeddings in the previous L − 1 layers, so the total memory complexity is O(bK + (L − 1)K s_layer + LK^2). In terms of time complexity, we need O(bTK s_layer + (L − 1)TK s_layer^2) for aggregation and O(bTK^2 + (L − 1)T s_layer K^2) for transformation. Note that b < s_layer, so we ignore the relatively small terms and obtain a total memory complexity of O(LK s_layer + LK^2) and a total time complexity of O(TLK s_layer^2 + TLK^2 s_layer).
For LADIES, it is also a layer-wise sampling method. Though it has a different sampling strategy from FastGCN, their memory and time costs are the same. Therefore, LADIES also has memory complexity O(LK s_layer + LK^2) and time complexity O(TLK s_layer^2 + TLK^2 s_layer).
A.2 SUPPLEMENTARY DISCUSSION OF COORDINATE AND BLOCK COORDINATE DESCENT
Coordinate descent (CD) is a classic iterative optimization algorithm that solves an optimization problem by approximately minimizing the objective along each coordinate direction successively.
In each iteration, it chooses one variable, fixes the other components, and then optimizes the objective with respect to only that single variable. By doing so, we only need to solve a lower-dimensional minimization problem at each iteration, which is easier. The CD algorithm has been discussed extensively in the literature and has long been used in applications (Wright, 2015; Wu et al., 2008; Shi et al., 2016).
Block coordinate descent (BCD) is an extension of the CD method. The difference between BCD and conventional CD is that BCD searches along a coordinate hyperplane instead of a single coordinate direction (Beck & Tetruashvili, 2013), i.e., it groups variables into blocks and approximately minimizes the objective with respect to only one block of variables at each iteration while fixing the others.
The BCD algorithm has parallel implementations. As introduced in the work of Wright (2015), we can categorize them into two types: synchronous and asynchronous. For synchronous parallel BCD, we partition the computation into pieces and put different pieces on different processors; each processor updates a part of the variables in parallel, and then a synchronization step is conducted to guarantee the consistency of the information shared among all processors before further computation. In the asynchronous setting, the difference is that we do not have to do the synchronization. As discussed in Section 5, our proposed method can be regarded as analogous to BCD and its aforementioned variants. A minimal code sketch of the synchronous variant is given below.
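To make the synchronous variant concrete, the following is a minimal Python sketch of synchronous parallel BCD on a toy quadratic objective; the objective, the block partition, and the step-size choice are illustrative assumptions rather than part of the algorithms discussed above.

import numpy as np

def sync_parallel_bcd(A, b, x, blocks, iters=500):
    """Synchronous parallel BCD for f(x) = 0.5 * x^T A x - b^T x.
    All blocks read the same snapshot of x; the synchronization step
    makes the updated state consistent before the next iteration."""
    lr = 1.0 / np.linalg.eigvalsh(A).max()  # a safe step size for this objective
    for _ in range(iters):
        snapshot = x.copy()                 # consistent state shared by all blocks
        grad = A @ snapshot - b             # gradient of f at the snapshot
        for blk in blocks:                  # in practice, one block per processor
            x[blk] = snapshot[blk] - lr * grad[blk]
    return x

# Toy usage: a 4-dimensional problem split into two coordinate blocks.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = M @ M.T + np.eye(4)                     # symmetric positive definite
b = rng.normal(size=4)
x = sync_parallel_bcd(A, b, np.zeros(4), [np.array([0, 1]), np.array([2, 3])])
print(np.allclose(A @ x, b, atol=1e-3))     # x approaches the minimizer of f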
A.3 SUPPLEMENTARY FOR EXPERIMENTS
Dataset Statistics We use the following benchmark datasets for the node classification task: Cora, Citeseer, Pubmed, Reddit, and OGBN-arxiv. The detailed statistics of these datasets are shown in Table 4.
Hardware We run the experiments on a Tesla V100 GPU (16GB).
Baselines A detailed introduction of our baselines can be found in Section 2. As introduced before, since NS methods are in general less competitive than IS methods, we only compare against the IS strategies.
Model Architecture For Cora, Citeseer, Pubmed, we set the number of layers as 2 for all the methods. For OGBN-arxiv, we set it to be 7.
Parameter Settings For all the methods and datasets, we conduct training 10 times and record the mean and variance. For the small datasets (Cora, Citeseer, Pubmed), we set the number of epochs to 200; for OGBN-arxiv, we set it to 500. We choose the model with the best validation performance as the convergence point. For the sampling-based methods, FastGCN and LADIES, we set the sample number to 64; increasing this number would improve the accuracy a little, but would increase the memory and time cost. | 1. What is the main contribution of the paper regarding GNNs?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of time and memory efficiency and parallelization strategy?
3. How does the method compare to other approaches in terms of representation accuracy and model complexity?
4. Are there any limitations or tradeoffs associated with the proposed method, especially regarding the decoupling of inference and aggregation?
5. What are some potential future research directions related to this work? | Review | Review
GNNs are known to be computationally complicated and require a significant amount of time and memory. Unlike CNNs, inference of the node embeddings in a GNN requires information to be propagated throughout the entire graph for each epoch. In this paper, the authors propose to decouple the inference and aggregation in each layer and learn the layer-wise latent representations separately. The result is to decompose the deep GNN into a series of shallow networks that are connected sequentially. Since the optimization is done independently, it requires significantly less time and memory.
This paper is clearly written, and the contribution and literature review are sufficient for the reader to understand. The analogy to block coordinate descent is a good way for readers to connect this work with well-established work in optimization theory. The idea of layerwise decomposition was published in [You et al., 2020], and this paper improves over it by introducing more parallelization.
Pros:
The layer-wise decomposition is not a new idea, but for optimizing GNNs it is not as popular as in standard DL frameworks. Study of this topic will definitely bring some attention to the community and is encouraged.
The parallelization strategy and lazy update scheme are useful and easy to implement.
The experiments show that the time reduction of LU-DGL-GCN is significant and does not depend on the depth of the model. Therefore it is applicable to larger graphs and deeper structures.
Cons:
Unlike traditional convex optimization, where block coordinate descent is known to converge to the optimal point, in the setting of GCNs there is no proof of how close the block coordinate descent solution would be to that of the traditionally trained GCN.
Following the above question, it is known that the message-passing and aggregation in one layer only accumulate information in the 1-hop neighborhood, and as a result, the embedding it learns only encodes the local structure within the 1-hop neighborhood. In many applications, the structural information of the graph can only be obtained through a wider p-hop neighborhood where p > 1. A traditional GNN performs inference over the p-hop neighborhood by optimizing all p layers together in backpropagation. While the layer decomposition is attractive in time, LU-DGL-GCN loses the accuracy to infer structural information in larger neighborhoods. And because each layer is optimized using an independent classifier on the same target, it is not hard to expect the node representations learned in the bottom layer to be less representative in LU-DGL-GCN compared to the traditional GCN, as they do not encode information beyond the 1-hop neighborhood.
In early DL, layer-wise optimization was only used as pre-training to obtain a good weight initialization, after which a further full training was needed to obtain a representative embedding and model. It is suggested to conduct a full training with the weights learned by LU-DGL-GCN and to compare with the traditional GCN to see if there are better solutions.
It would be better to discuss the limitation of the decoupled greedy learning, i.e., the model is myopic in that the resulting layer-wise node embeddings only collect information from nearest neighbors. This is especially an issue for the lower-level representations, since they lack a prior learned from upper layers to guide their inference.
LU-DGL-GCN is a cascade of L independent shallow network models, as opposed to one L-layer deep model. It is known that the representative and expressive power of a deep model is higher than that of sequentially connected independent shallow models. Thus, compared to a deep model, LU-DGL-GCN sacrifices representation accuracy and model complexity to obtain faster training and less memory. This is the tradeoff. |
ICLR | Title
Decoupled Greedy Learning of Graph Neural Networks
Abstract
Graph Neural Networks (GNNs) have become very popular for graph-related applications due to their superior performance. However, they have been shown to be computationally expensive in large-scale settings, because their produced node embeddings have to be computed recursively, which scales exponentially with the number of layers. To address this issue, several sampling-based methods have recently been proposed to perform training on a subset of nodes while maintaining the fidelity of the trained model. In this work, we introduce a decoupled greedy learning method for GNNs (DGL-GNN) that, instead of sampling the input graph, decouples the GNN into smaller modules and associates each module with greedy auxiliary objectives. Our approach allows GNN layers to be updated during the training process without waiting for feedback from successor layers, thus making parallel GNN training possible. Our method achieves improved efficiency without significantly compromising model performance, which is important for time- or memory-limited applications. Further, we propose a lazy-update scheme during training to further improve its efficiency. We empirically analyse our proposed DGL-GNN model, and demonstrate its effectiveness and superior efficiency through a range of experiments. Compared to sampling-based acceleration, our model is more stable, and we do not have to trade off between efficiency and accuracy. Finally, we note that while here we focus on comparing the decoupled approach as an alternative to other methods, it can also be regarded as complementary, for example, to sampling and other scalability-enhancing improvements of GNN training.
1 INTRODUCTION
Graph Neural Networks (GNNs) have been shown to be highly effective in graph-related tasks, such as node classification (Kipf & Welling, 2016), graph classification (Ying et al., 2018b), graph matching (Bai et al., 2019), and recommender systems (Ying et al., 2018a). Given a graph of arbitrary size and attributes, GNNs obtain informative node embeddings by first conducting a graph convolution operation to aggregate information from the neighbors of each node, and then transforming the aggregated information. As a result, GNNs can fuse together the topological structure and node features of a graph, and have thus become dominant models for graph-based applications.
Despite its superior representation power, the graph convolution operation has been shown to be expensive when GNNs become deep and wide (Chen et al., 2017). Therefore, training a deep GNN model is challenging for large and dense graphs. Since deep and wide GNNs are becoming increasingly important with the emergence of classification tasks on large graphs, such as the newly proposed OGB datasets (Hu et al., 2020), and semantic segmentation tasks as introduced in (Li et al., 2019), we focus here on studying methods for alleviating computational burdens associated with large-scale GNN training.
Several strategies have been proposed during the past years to alleviate this computation issue of large-scale GNNs. GraphSAGE (Hamilton et al., 2017) took the first step by leveraging a neighborhood sampling strategy for GNN training, which only aggregates a sampled subset of neighbors of each node in the graph convolution operation. However, though this sampling method helps reduce memory and time cost for shallow GNNs, it computes the representation of a node recursively, and the node's receptive field grows exponentially with the number of GNN layers, which may make
the memory and time cost grow even larger for deeper GNNs when the sample number is large. The works of Chen et al. (2017; 2018); Zou et al. (2019) developed sampling-based stochastic training methods to train GNNs more efficiently and avoid this exponential growth problem. Chiang et al. (2019) proposed a batch learning algorithm that exploits the graph clustering structure. Beyond the aforementioned methods, You et al. (2020) recently proposed a layer-wise sequential training algorithm for GNNs, which decouples the aggregation and transformation operations in the per-layer feed-forward process and reduces the time and memory cost during training while not sacrificing too much model capability; this indicates that the GNN layers do not have to be learned jointly. However, the sequential training brings some inefficiency.
In addition to the inefficiency brought by the graph convolution operation, as discussed in (Belilovsky et al., 2019a), the sequential nature of standard backpropagation also leads to inefficiency. As pointed out in (Jaderberg et al., 2017), backpropagation for deep neural networks suffers from an update-locking problem, which means each layer heavily relies on upper layers' feedback to update itself, and thus it must wait for the information to propagate through the whole network before updating. This is a great obstacle to training GNN layers in parallel to alleviate computation pressure under time and memory constraints, and it prohibits the GNN from being trained in an asynchronous setting.
In this work, using semi-supervised node classification as an example, we show that greedy learning helps to decouple the optimization of each layer in GNNs and enables GNNs to achieve update-unlocking, i.e., it allows the GNN layers to update without getting any feedback from the later layers. By using this decoupled greedy learning for GNNs, we can achieve parallelization of the network layers, which makes the model training much more efficient and is very important for time- or memory-limited applications. Moreover, we propose to use a lazy-update scheme during training, which exchanges information between layers after a certain number of epochs instead of every epoch; this further improves efficiency while not sacrificing much performance. We theoretically analyze the computation complexity of our proposed method, and draw an analogy between our method and the classic block coordinate descent optimization to enable further analysis. We run a set of experiments to justify our model, and show its great efficiency on all benchmark datasets. On the newly proposed large OGBN-arxiv dataset, when training a 7-layer model, our proposed method even saves 85% of the time and 66% of the per-GPU memory cost of the conventionally trained GCN.
Our main contributions can be summarized as follows. First, we introduce a decoupled greedy learning algorithm for GNNs that achieves update-unlocking and enables GNN layer to be trained in parallel. Next, we propose to leverage a lazy-update scheme to improve the training efficiency. We evaluate our proposed training strategy thoroughly on benchmark datasets, and demonstrate it has superior efficiency while not sacrificing much performance. Finally, our method is not limited to the GCN and the node classification task, but can be combined with other scalability-enhancing GNNs and can be applied to other graph-related tasks.
2 RELATED WORK
Before discussing our proposed approach, we review related work on efficient training strategies for GNNs. The computational complexities of the discussed methods are summarized in Table 1, and we refer the reader to Appendix A for the detailed computation.
2.1 DEEP GRAPH CONVOLUTIONAL NETWORK (DEEPGCN)
Graph convolutional network (GCN, Kipf & Welling, 2016) is one of the most popular models for graph-related tasks. Given an undirected graph G with node feature matrix X ∈ R^{N×D} and adjacency matrix A ∈ R^{N×N}, where N is the node number and D is the feature dimension, let Ã = A + I, let D̃ be the diagonal matrix satisfying D̃_{i,i} = Σ_{j=1}^{N} Ã_{i,j}, and let F = D̃^{-1/2} Ã D̃^{-1/2} be the normalized Ã. Then the l-th GCN layer has the output H^{(l)} = σ(F H^{(l-1)} W^{(l)}), where σ is the non-linear transformation and W^{(l)} is the trainable weight matrix at layer l.
As pointed out in Li et al. (2018), when a GCN becomes deep, it suffers a severe over-smoothing problem, which means the nodes become indistinguishable after stacking too many network layers. However, for applications such as semantic segmentation (Li et al., 2019) or classification
tasks on large datasets (Hu et al., 2020), we do need deeper GCN models. Therefore, we follow the work of Li et al. (2019), alleviating the over-smoothing problem by adding residual links between GCN layers to obtain the deepGCN model. The l-th layer of our network model is H^{(l)} = σ(F H^{(l-1)} W^{(l)}) + H^{(l-1)}.
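To make the layer definition concrete, here is a minimal PyTorch-style sketch of such a residual GCN layer; the class and argument names are illustrative assumptions, and F_norm is assumed to be a precomputed normalized adjacency tensor.

import torch
import torch.nn as nn

class ResGCNLayer(nn.Module):
    """One deepGCN layer: H_out = sigma(F @ H @ W) + H (residual link)."""
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Linear(dim, dim, bias=False)  # W^{(l)}

    def forward(self, F_norm, H):
        # Aggregation: multiplication with the (possibly sparse) normalized adjacency.
        agg = torch.sparse.mm(F_norm, H) if F_norm.is_sparse else F_norm @ H
        # Transformation + non-linearity, then the residual connection.
        return torch.relu(self.weight(agg)) + H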
2.2 EFFICIENT GNN TRAINING
To alleviate the expensive computation issue of GNNs introduced in the previous section, much of the literature has proposed sampling-based batch-learning algorithms to train GNNs more efficiently.
GraphSAGE (Hamilton et al., 2017) introduced a node sampling strategy (NS), which randomly samples s neighbors for each node at each layer; then, for each node, instead of aggregating the embeddings of all its neighbors, we only aggregate the sampled ones. VR-GCN (Chen et al., 2017) also followed this NS strategy, but further proposed to leverage history activations to reduce the variance of the estimator. Though the NS scheme has smaller complexity than full-batch GNN, there is redundant computation and the complexity grows exponentially with the layer number.
A layer-wise importance sampling strategy (IS) is a more advanced approach for efficient GNN training. FastGCN (Chen et al., 2018) proposed to sample nodes for each layer with a degree-based sampling probability in order to solve the scalability issue of NS. The work of LADIES (Zou et al., 2019) leveraged the IS idea as well, but proposed a layer-dependent importance sampling scheme, which enjoys a smaller variance while maintaining the same level of complexity as FastGCN. Though IS is better than NS in general, we may still have to trade off complexity against performance (i.e., accuracy for classification tasks), since using a large sample number helps the performance but increases the computation cost, and vice versa.
Besides the aforementioned methods, there is also ClusterGCN (Chiang et al., 2019), which proposes to partition the graph into several clusters and then randomly select multiple clusters to form a batch to train the GNN. Though this allows us to train a much deeper GCN without much time and memory overhead, the stability of the performance of this approach is hard to guarantee, since the performance heavily depends on the graph clustering settings.
2.3 LAYER-WISE GNN
Layerwise learning for neural networks was first introduced in the work of Hinton et al. (2006) and was further discussed in Bengio et al. (2007). The work of Belilovsky et al. (2019a;b) explored layerwise CNNs and achieved impressive results.
Recently, You et al. (2020) proposed a layerwise algorithm for GNN training. The key idea is to train GNNs layer by layer sequentially. Figure 1 illustrates the sequential training framework for a layerwise GNN. For an L-layer GNN, we first train its first layer with an auxiliary classifier; then, after it gets fully converged, we fix this layer and start to optimize the next layer; we do the same thing for all L layers. This layerwise training saves a lot of memory, since this method only requires us
to focus on one layer and only store one layer's activation results. Besides the clear memory saving, this layerwise training scheme also saves a lot of time. During the learning process, it decouples the two key components in the per-layer feed-forward graph convolution: aggregation and transformation. Then, for each layer, it only needs to conduct the aggregation once at the beginning of training, and only needs to do the transformation step at each iteration, which greatly reduces the time cost. According to the reported results, this sequentially-trained layerwise method is more efficient than the joint learning strategy and achieves good performance. However, its sequential scheme brings some inefficiency, because one layer has to wait for its previous layers to fully converge before it can start training. In our work, we solve this problem and enable model parallelization to further improve the efficiency.
3 PROPOSED APPROACH
3.1 MODEL ARCHITECTURE
As mentioned in Section 2.1, we introduce our proposed algorithm with the deepGCN model (i.e., the GCN model with residual links), since the residual links help to alleviate GCN's over-smoothing problem when it goes deep, which is important for large-scale scenarios.
There exist two ways to train such a GCN model: conventional training and sequential layerwise training. We illustrate these two strategies with the high-level framework shown in Figure 1. For conventional training, we jointly optimize the learnable parameters in all layers and in the classifier. For layerwise training, we break the training of an L-layer GNN into L sequential stages; each stage has to wait for all its previous layers to fully converge before it starts training.
Note that the sequential layerwise training has the advantage that it can save time and memory while not compromising too much performance, which suggests promising applications in large-scale models under hardware and time constraints. We now consider whether we can extend it to a parallel version so that the efficiency can be further improved. Interestingly, as shown in the following sections, we find the answer is affirmative.
3.2 DECOUPLED GREEDY LEARNING ALGORITHM
To enable parallel GNN training, the most challenging problem is update-locking. Before updating one layer, we have to wait until the signal has been passed through all its successors, which brings inefficiency. To alleviate this problem, we follow the design of the layerwise GNN: we decouple the GNN model into different layers, associate each layer with an auxiliary classifier, which is an MLP layer with softmax activation, and assign a per-layer greedy objective. Then, with the output activation of a given layer, we can leverage the auxiliary classifier to optimize the per-layer objective, and therefore can update the current layer without any feedback from its successors while the rest of the layers are still in the forward process.
Algorithm 1 Decoupled Greedy Learning (DGL) of GNNs
Require: Normalized Adjacency Matrix F; Feature Matrix X; Labels Y; Total Number of Iterations T; Total Number of Layers L.
1: Initialize: H^{(0)} = X
2: for t = 1 to T do
3:   for l = 1 to L do
4:     H^{(l)} = σ(F H^{(l-1)} W^{(l)})  {Get node embeddings and store them as H^{(l)}.}
5:     (W^{(l)}, Θ^{(l)}) ← Update with ∇_{(W^{(l)},Θ^{(l)})} loss(Y, H^{(l-1)}, F; W^{(l)}, Θ^{(l)})  {Update parameters.}
6:   end for
7: end for
Algorithm 2 Decoupled Greedy Learning (DGL) of GNNs with Lazy Update Scheme
Require: Normalized Adjacency Matrix F; Feature Matrix X; Labels Y; Total Number of Iterations T; Total Number of Layers L; Waiting time Tlazy.
1: Initialize: Ĥ^{(0)} = FX
2: for t = 1 to T do
3:   for l = 1 to L do
4:     H^{(l)} = σ(Ĥ^{(l-1)} W^{(l)})  {Get node embeddings.}
5:     (W^{(l)}, Θ^{(l)}) ← Update with ∇_{(W^{(l)},Θ^{(l)})} loss(Y, Ĥ^{(l-1)}; W^{(l)}, Θ^{(l)})  {Update parameters.}
6:     if (t mod Tlazy == 0) then
7:       Ĥ^{(l)} = F H^{(l)}  {Get propagated node embeddings and store them as Ĥ^{(l)}.}
8:     end if
9:   end for
10: end for
We name our training strategy Decoupled Greedy Learning of GNNs (DGL-GNN).
With our DGL-GNN, we achieve update-unlocking, and therefore can enable parallel training for layerwise GNNs. For clarity, we provide Figure 2 to compare the signal propagation processes of the conventionally trained GNN, the sequentially trained layerwise GNN, and the parallel-trained GNN. With this illustration, we can observe that the parallel training of a layerwise GNN avoids the case in which one layer is forwarding or back-propagating the signal while other layers are idle. Therefore, given the same number of batches of data, the parallel version finishes training much earlier than the conventional and sequential versions.
Following our notation in Section 2, we denote by F the normalized adjacency matrix, H^{(l)} the output activation of the l-th layer, and W^{(l)} the learnable parameters of the l-th layer. In addition, let Y be the labels, Θ^{(l)} be the parameters of the l-th layer's classifier, and loss be the cross-entropy loss that is frequently used for classification tasks. Then we have the per-layer objective function loss(Y, H^{(l-1)}, F; W^{(l)}, Θ^{(l)}). We now formally define our DGL-GNN training method in Algorithm 1. Note that the inner for-loop can be executed in a parallel manner, i.e., while the l-th layer is working on the backward process given in line 5, the (l+1)-th layer can start the forward propagation given in line 4. Therefore, we claim our DGL-GNN algorithm achieves update-unlocking.
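To illustrate the per-layer greedy update of Algorithm 1, here is a minimal PyTorch-style sketch of one decoupled layer with its auxiliary classifier; the class and attribute names, the use of ReLU and cross-entropy, and the optimizer interface are illustrative assumptions. (torch.nn.functional is aliased as Func because F denotes the normalized adjacency here.)

import torch
import torch.nn as nn
import torch.nn.functional as Func

class GreedyGCNLayer(nn.Module):
    """One GCN layer plus its auxiliary classifier (greedy objective)."""
    def __init__(self, in_dim, out_dim, num_classes):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)   # W^{(l)}
        self.classifier = nn.Linear(out_dim, num_classes)      # Theta^{(l)}

    def forward(self, F_norm, H_prev):
        # F_norm is assumed to be a sparse normalized adjacency.
        return torch.relu(self.weight(torch.sparse.mm(F_norm, H_prev)))

    def greedy_step(self, F_norm, H_prev, y, train_mask, opt):
        """Update (W, Theta) locally, without feedback from successor layers."""
        opt.zero_grad()
        H = self.forward(F_norm, H_prev)
        loss = Func.cross_entropy(self.classifier(H)[train_mask], y[train_mask])
        loss.backward()
        opt.step()
        return H.detach()   # detached output feeds the next layer's module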
We then empirically observed that, without passing the signal to the next layer immediately after the forward process, we still get the same level of performance. Thus, the efficiency of DGL-GNN can be further improved by leveraging a Lazy Update scheme (LU-DGL-GNN). Instead of using the up-to-date activation output from its predecessor, a layer can use the history activation to learn its parameters and only update the history activation a few times during the overall training process. Then, as in the sequentially trained layerwise GNN, we only need to conduct the aggregation step once after each update, which saves a lot of time. We denote by Ĥ^{(l)} the aggregated stored history activation for layer l. We formally define the LU-DGL-GNN method in Algorithm 2, with its differences from DGL-GNN marked in blue.
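A minimal sketch of the lazy-update training loop of Algorithm 2 follows, reusing the hypothetical GreedyGCNLayer attributes (layer.weight, layer.classifier) from the sketch above; the initialization of the stale inputs and the serial loop over layers are simplifications, since a real implementation would place each layer on its own GPU.

import torch
import torch.nn.functional as Func

def train_lu_dgl(layers, opts, F_norm, X, y, train_mask, T, T_lazy):
    """Lazy-update decoupled greedy training; T_lazy = 1 recovers plain DGL-GNN."""
    L = len(layers)
    H_hat = [torch.sparse.mm(F_norm, X)]           # \hat{H}^{(0)} = F X
    with torch.no_grad():                          # initialize stale inputs once
        for l in range(1, L):
            H_hat.append(torch.sparse.mm(F_norm, torch.relu(layers[l - 1].weight(H_hat[l - 1]))))
    for t in range(1, T + 1):
        for l in range(1, L + 1):                  # in practice: one layer per GPU
            layer, opt = layers[l - 1], opts[l - 1]
            opt.zero_grad()
            # Transformation-only step: the input is already pre-aggregated.
            H = torch.relu(layer.weight(H_hat[l - 1]))
            loss = Func.cross_entropy(layer.classifier(H)[train_mask], y[train_mask])
            loss.backward()
            opt.step()
            if t % T_lazy == 0 and l < L:          # lazy propagation to successors
                with torch.no_grad():
                    H_hat[l] = torch.sparse.mm(F_norm, H.detach())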
To sum up, our proposed DGL-GNN and LU-DGL-GNN methods enjoy very high efficiency: we introduce an auxiliary greedy objective for each layer and thus achieve update-unlocking; we decouple the model into layers and therefore enable the model to be trained in parallel; and we leverage the lazy-update scheme, with which we can avoid redundant computation in the aggregation step and further reduce the training time.
4 COMPLEXITY ANALYSIS
As shown in Table 1, our proposed methods achieve a lower complexity compared to conventional training and the other baselines. Note that DGL-GCN can be regarded as the special case in which Tlazy = 1, i.e., we update the stored activation every epoch. Therefore, we focus on the complexity justification for LU-DGL-GCN.
For time complexity, first note that the training process consists of two fundamental operations: aggregation and transformation. The time complexity of aggregation is O(‖A‖_0 K), and the time complexity of the transformation step is O(NK^2). For LU-DGL-GCN, during the full learning process, each layer has to do the aggregation T/Tlazy times, and has to do the transformation 2T times, because this step is conducted for both the GNN layer and the auxiliary classifier. Since the computation for each layer is done in parallel, the overall time complexity of LU-DGL-GCN is O(T‖A‖_0 K/Tlazy + 2TNK^2). In practice, if we put different layers on different GPUs and do the training in parallel, there is some extra non-negligible time cost for GPU communication.
The memory complexity also consists of two components: for each layer during training, we have to store the history activation and the learnable weight matrices of the GNN layer and the auxiliary classifier. The activation takes O(NK), and the two weight matrices take O(2K^2) space. Again, when we do the training in a parallel fashion and assign the layers to different machines, the per-GPU memory is only O(NK + 2K^2), which is significantly reduced compared to most existing baselines.
5 ANALOGY TO BLOCK COORDINATE DESCENT
To justify the rationality of the proposed model, we present here an analogy of our decoupling approach to the classic Block Coordinate Descent (BCD) optimization strategy and its variants (Wright, 2015; Wu et al., 2008; Shi et al., 2016), which for completeness are discussed in Appendix A.2. We observe that the sequential layerwise GNN shares a similar high-level idea with the BCD method. We regard each layer and its associated auxiliary classifier as a module. Then, for each module, all its parameters can be treated as a coordinate block, and we order the coordinate blocks according to the layers they correspond to. Note that, in BCD, at each iteration we choose one coordinate block and optimize the overall training objective with respect to the chosen block. So if we keep choosing the first coordinate block until it fully converges, then keep choosing the second coordinate block, and so on, until the last coordinate block fully converges, this optimization process is the same as the learning process of a sequentially trained layerwise GNN.
We also observe that DGL-GCN is analogous to synchronous parallel BCD, and LU-DGL-GCN can be regarded as an analogue of asynchronous parallel BCD. Note that our DGL-GCN and LU-DGL-GCN can be implemented in a parallel fashion, and their key difference is whether all the layers share consistent and up-to-date information. Therefore, if we make the same analogy between learnable parameters and coordinate blocks as in the sequential version above, it is easy to see the similarity between parallel BCD and our decoupled greedy learning methods. With such an analogy, we can leverage existing theorems for BCD optimization to better understand and analyze DGL-GCN and LU-DGL-GCN.
6 EMPIRICAL RESULTS
We evaluate our proposed algorithms on the multi-class node classification task. However, it should be noted that the decoupled greedy learning method can also be applied to other graph-related tasks and is not limited to node classification.
We use the following four public datasets for evaluation: cora, citeseer, pubmed (Sen et al., 2008), and OGBN-arxiv (Hu et al., 2020). We briefly introduce these datasets and summarize their statistics in Appendix A.3. We compare our method against several baseline models introduced in Section 2: Full-Batch GCN (Kipf & Welling, 2016), FastGCN (Chen et al., 2018), LADIES (Zou et al., 2019), and LGCN (You et al., 2020). For all the methods, we use the same deepGCN model architecture and set the hidden dimension to 128. We follow the public implementations of all the baselines and use their parameter settings. We conduct the training 10 times and take the mean and variance of the evaluation results. Further implementation details are provided in Appendix A.3.
We evaluate the performance of different methods with the following evaluation metrics:
Accuracy (%): the micro F1-score of the test data at the convergence point.
Memory (MiB): the maximum per-GPU memory cost during training.
Total Running Time (s): the total training time (excluding validation) before convergence.
We summarize the classification performance results in Table 6, which demonstrates the efficacy of our approach. We can see that, for small-scale datasets such as Cora, Citeseer, and Pubmed, our LU-DGL-GCN is the fastest among all the baselines, and its accuracy is only lower than Full-Batch GCN while being higher than all the other efficient GCN training strategies. Since we only use a 2-layer GCN architecture for these small-scale datasets, we cannot see much memory advantage. For the large-scale OGBN-arxiv dataset, we use a deeper 7-layer GCN architecture; there we find that, compared to Full-Batch GCN, our LU-DGL-GCN is much faster and saves per-GPU memory while only sacrificing minor performance. Compared to the sampling-based methods (LADIES and FastGCN), our LU-DGL-GCN has better accuracy and is faster, while needing only slightly more per-GPU memory. Note that here we set the sample number to 64 for FastGCN and LADIES; if we increase the sample number, we obtain higher accuracy but the time and memory costs become larger. Compared to LGCN, which is a sequentially-trained layerwise GCN, our LU-DGL-GCN has a clear advantage in time cost, and even obtains a small accuracy improvement. Our experimental results, together with our analysis in previous sections, indicate that our LU-DGL-GNN would be very helpful for time- and memory-limited applications, especially for scenarios that need deep models. Furthermore, to establish the importance of different components of our method, we isolate certain aspects of it via the following ablation study.
Importance of Lazy Update Scheme. We illustrate the advantage of the Lazy Update Scheme and show how the waiting time Tlazy will influence the performance. We use the cora and OGBN-arxiv datasets as examples for small-size and large-size scenarios, and follow the previous model architectures and parameters. We run the model for 200 epochs and compare the obtained accuracy, total running time, and memory. We summarize the results in Table 6. We find that, with the lazy update scheme, we can greatly reduce the time cost, and a proper waiting time Tlazy helps to improve the accuracy a little since it can alleviate overfitting. In addition, when comparing the accuracy curves of different Tlazy shown in Figure 3, we find that a large Tlazy makes the training more stable.
Sequential Training vs. Parallel Training. Finally, we briefly compare the sequential training and our parallel training of the greedy objective. Again, we use cora and OGBN-arxiv as examples and follow the above experiment settings. As shown in Figure 3, we find that in terms of accuracy, parallel training quickly catches up with sequential training, and is less likely to overfit.
7 CONCLUSIONS
In this paper, we focus on the efficiency issue of GNN training in large-scale applications and present a decoupled greedy GNN learning strategy. Our proposed DGL-GNN model achieves update-unlocking by introducing greedy auxiliary objectives during training, and enables parallelization by decoupling the GNN into smaller modules. We also propose the LU-DGL-GNN method, which leverages a lazy-update scheme during training to further improve the model efficiency. We empirically analyze our proposed DGL-GNN model and demonstrate its effectiveness and superior efficiency through a range of experiments. We note that while we introduce our proposed method with the deepGCN model and use the semi-supervised node classification task as an example, the DGL-GNN and LU-DGL-GNN methods are not limited to this setting, and can be applied to other GNNs and graph-related downstream tasks. Further, while here we focus on comparing the decoupled approach as an alternative to other sampling-based methods with respect to their accuracy and efficiency, these approaches can be regarded as complementary to each other. By combining the decoupled greedy learning method with other scalability-enhancing improvements of GNN training, the computation cost would be further reduced, which poses a promising direction for future work.
A APPENDIX
A.1 COMPLEXITY ANALYSIS FOR BASELINES
In this section, we explain how we compute the memory and time complexity in Table 1.
All the aforementioned methods’s memory cost consists two parts: intermediate embedding matrices storage and weight matrices storage. The weight matrices always needO(LK2) since it has to store L weight matrices with dimension K ×K. The intermediate embedding has different memory cost for different methods. In terms of time complexity, it also consists two parts: the aggregation time cost and transformation time cost. These two parts varies for different methods.
Full-batch GCN stores the intermediate embedding matrices for all L layers, and each matrix has N nodes with dimension K, so its memory complexity for intermediate embedding storage is O(LNK). Therefore, its total memory complexity is O(LNK + LK^2). In terms of time complexity, the propagation step, which is a sparse-matrix multiplication, has time complexity O(‖A‖_0 K), and the transformation step, which is a dense-matrix multiplication, has time complexity O(NK^2). Over L layers and T iterations, the total time complexity is O(TL‖A‖_0 K + TLNK^2).
For GraphSAGE, it has to store O(b s_node^{L-1}) K-dimensional node embeddings, so it has a total memory complexity of O(bK s_node^{L-1} + LK^2). In terms of time complexity, for each batch, to update one node we have to update O(s_node^{L-1}) activations, each of which needs O(K s_node) for aggregation and O(K^2) for transformation. Therefore, the total time complexity for GraphSAGE is O(bTK s_node^L + bTK^2 s_node^{L-1}).
For VR-GCN, it stores all historical activations, which takes O(LNK) memory and leads to a total memory complexity of O(LNK + LK^2). The time complexity can be analyzed similarly to GraphSAGE, and is O(bT D̄ K s_node^{L-1} + bK^2 s_node^{L-1}), where D̄ is the average node degree.
For FastGCN, it only has to store b node embeddings in the last layer and (L − 1) s_layer node embeddings in the previous L − 1 layers, so the total memory complexity is O(bK + (L − 1)K s_layer + LK^2). In terms of time complexity, we need O(bTK s_layer + (L − 1)TK s_layer^2) for aggregation and O(bTK^2 + (L − 1)T s_layer K^2) for transformation. Note that b < s_layer, so we ignore the relatively small terms and obtain a total memory complexity of O(LK s_layer + LK^2) and a total time complexity of O(TLK s_layer^2 + TLK^2 s_layer).
For LADIES, it is also a layer-wise sampling method. Though it has a different sampling strategy from FastGCN, their memory and time costs are the same. Therefore, LADIES also has memory complexity O(LK s_layer + LK^2) and time complexity O(TLK s_layer^2 + TLK^2 s_layer).
A.2 SUPPLEMENTARY DISCUSSION OF COORDINATE AND BLOCK COORDINATE DESCENT
Coordinate descent (CD) is a classic iterative optimization algorithm that solves an optimization problem by approximately minimizing the objective along each coordinate direction successively.
In each iteration, it chooses one variable, fixes the other components, and then optimizes the objective with respect to only that single variable. By doing so, we only need to solve a lower-dimensional minimization problem at each iteration, which is easier. The CD algorithm has been discussed extensively in the literature and has long been used in applications (Wright, 2015; Wu et al., 2008; Shi et al., 2016).
Block coordinate descent (BCD) is an extension of the CD method. The difference between BCD and conventional CD is that BCD searches along a coordinate hyperplane instead of a single coordinate direction (Beck & Tetruashvili, 2013), i.e., it groups variables into blocks and approximately minimizes the objective with respect to only one block of variables at each iteration while fixing the others.
The BCD algorithm has parallel implementations. As introduced in the work of Wright (2015), we can categorize them into two types: synchronous and asynchronous. For synchronous parallel BCD, we partition the computation into pieces and put different pieces on different processors; each processor updates a part of the variables in parallel, and then a synchronization step is conducted to guarantee the consistency of the information shared among all processors before further computation. In the asynchronous setting, the difference is that we do not have to do the synchronization. As discussed in Section 5, our proposed method can be regarded as analogous to BCD and its aforementioned variants.
A.3 SUPPLEMENTARY FOR EXPERIMENTS
Dataset Statistics We use the following benchmark datasets for the node classification task: Cora, Citeseer, Pubmed, Reddit, and OGBN-arxiv. The detailed statistics of these datasets are shown in Table 4.
Hardware We run the experiments on a Tesla V100 GPU (16GB).
Baselines A detailed introduction of our baselines can be found in Section 2. As introduced before, since NS methods are in general less competitive than IS methods, we only compare against the IS strategies.
Model Architecture For Cora, Citeseer, Pubmed, we set the number of layers as 2 for all the methods. For OGBN-arxiv, we set it to be 7.
Parameter Settings For all the methods and datasets, we conduct training 10 times and record the mean and variance. For the small datasets (Cora, Citeseer, Pubmed), we set the number of epochs to 200; for OGBN-arxiv, we set it to 500. We choose the model with the best validation performance as the convergence point. For the sampling-based methods, FastGCN and LADIES, we set the sample number to 64; increasing this number would improve the accuracy a little, but would increase the memory and time cost. | 1. What is the focus of the paper regarding graph convolution nets?
2. What are the strengths and weaknesses of the proposed decoupled greedy learning process?
3. How does the reviewer assess the novelty and applicability of the approach?
4. What are the concerns regarding the experimental results and comparisons with other works?
5. Are there any suggestions for improving the paper or exploring related research directions? | Review | Review
The paper gives up end-to-end training and instead trains graph convolution nets layer by layer. The authors call such a layer-by-layer training process decoupled greedy learning (DGL-GNN). In the decoupled greedy learning process, a lazy-update scheme is adopted, that is, information is exchanged between layers after a certain number of epochs instead of every epoch. The aggregation and transformation processes are also separated and performed independently. The authors show their proposed engineering methods can save a constant fraction of computation time for graph convolution nets, but at the cost of sacrificing some predictive performance. The authors conduct some experiments on semi-supervised node classification data, reporting less memory and runtime but worse performance.
While the paper attempts to study the important problem of scaling graph networks to large graphs, and the authors show the proposed method is capable of saving memory and runtime by sacrificing predictive performance (which is not surprising at all), it is not suitable for publication at ICLR.
Comments:
Novelty is clearly below the bar of ICLR. The proposed method "layer-by-layer training" is rather trivial.
No theoretical or empirical justification is given for giving up end-to-end training. In fact, there are many easy counter-examples to show this approach would fail. Moreover, another problem with this method is the lack of auxiliary labels for each layer. It is easy to find problems where we do not have such layer wise auxiliary labels, and the proposed method would not work.
Moreover, even by doing layer by layer training, the proposed method cannot solve the scaling issue for billion scale networks, so some sampling is still needed.
This paper is purely experimental; however, the experiments are unsatisfactory:
The reported performance is not competitive. In Table 2 and Table 3, the test performance of the proposed method is worse than prior works, e.g., LGCN, despite consuming even more memory. In this sense, practitioners would simply prefer LGCN over layer-by-layer training.
The datasets in the experiments are very small, e.g., cora only has a few thousand nodes. What is the value of acceleration (while sacrificing performance) on such small graphs?
More graph neural network architectures need to be tested to show the effectiveness of the method: 1. graph attention network, 2. graph isomorphism network, 3. GraphSAGE, 4. Chebyshev GCN.
Missing link prediction benchmarks.
Missing baselines: SIGN, GraphSAINT.
Grammar mistakes: then after it get fully converged -> gets
optimize next layer -> the next
we do the same things -> thing
which scales exponentially with the number of layers -> factual error |
ICLR | Title
Amortized Nesterov's Momentum: Robust and Lightweight Momentum for Deep Learning
Abstract
Stochastic Gradient Descent (SGD) with Nesterov’s momentum is a widely used optimizer in deep learning, which is observed to have excellent generalization performance. In this work, we propose Amortized Nesterov’s Momentum, which is a special variant of Nesterov’s momentum. Compared with Nesterov’s momentum, our new momentum has more robust iterates and higher efficiency. Our empirical results show that it achieves faster early convergence and comparable final generalization performance with little-to-no tuning. Just like Nesterov’s method, the new schemes are also proved optimal in the general convex setting. Our analysis sheds light on the understanding of the new variant.
1 INTRODUCTION
In recent years, Gradient Descent (GD) (Cauchy, 1847) and its variants have been widely used to solve large scale machine learning problems. Among them, Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951), which replaces gradient with an unbiased stochastic gradient estimator, is a popular choice of optimizer especially for neural network training which requires lower precision. Sutskever et al. (2013) found that using SGD with Nesterov’s momentum (Nesterov, 1983; 2013b), which was originally designed to accelerate deterministic convex optimization, achieves substantial speedups for training neural networks. This finding essentially turns SGD with Nesterov’s momentum into the benchmarking method of neural network design, especially for classification tasks (He et al., 2016b;a; Zagoruyko & Komodakis, 2016; Huang et al., 2017). It is observed that in these tasks, the momentum technique plays a key role in achieving good generalization performance.
Adaptive methods (Duchi et al., 2011; Kingma & Ba, 2015; Tieleman & Hinton, 2012; Reddi et al., 2018), which are also becoming increasingly popular in the deep learning community, diagonally scale the gradient to speed up training. However, Wilson et al. (2017) show that these methods always generalize poorly compared with SGD with momentum (both classical momentum (Polyak, 1964) and Nesterov’s momentum).
In this work, we introduce Amortized Nesterov’s Momentum, which is a special variant of Nesterov’s momentum. From users’ perspective, the new momentum has only one additional integer hyper-parameter m to choose, which we call the amortization length. Learning rate and momentum parameter of this variant are strictly aligned with Nesterov’s momentum and by choosing m = 1, it recovers Nesterov’s momentum. This paper conducts an extensive study based on both empirical evaluation and convex analysis to identify the benefits of the new variant (or from users’ angle, to set m apart from 1). We list the advantages of Amortized Nesterov’s Momentum as follows:
• Increasing m improves robustness¹. This is an interesting property since the new momentum not only provides acceleration, but also enhances the robustness. We provide an understanding of this property by analyzing the relation between convergence rate and m in the convex setting.
• Increasing m reduces (amortized) iteration complexity.
• A suitably chosen m boosts the convergence rate in the early stage of training and produces
comparable final generalization performance.
¹ In this work, robustness refers to the probability of an optimizer significantly deviating from its expected performance, which can be reflected by the deviations of accuracy or loss in the training process over multiple runs that start with the same initial guess.
• It is easy to tune m. The performances of the methods are stable for a wide range of m and we prove that the methods converge for any valid choice of m in the convex setting.
• If m is not too large, the methods obtain the optimal convergence rate in the general convex setting,
just like Nesterov’s method.
The new variant does have some minor drawbacks: it requires one more memory buffer, which is acceptable in most cases, and it shows some undesired behaviors when working with learning rate schedulers, which can be addressed by a small modification. Considering these pros and cons, we believe that the proposed variant can benefit many large-scale deep learning tasks.
Our high level idea is simple: the stochastic Nesterov’s momentum can be unreliable since it is provided only by the previous stochastic iterate. The iterate potentially has large variance, which may lead to a false momentum that perturbs the training process. We thus propose to use the stochastic Nesterov’s momentum based on several past iterates, which provides robust acceleration. In other words, instead of immediately using an iterate to provide momentum, we put the iterate into an “amortization plan” and use it later.
2 PRELIMINARIES: SGD AND NESTEROV’S MOMENTUM
We start with a review of SGD and Nesterov’s momentum. We discuss some subtleties in the implementation and evaluation, which contributes to the interpretation of our methods.
Notations In this paper, we use x ∈ R^d to denote the vector of model parameters. ‖·‖ and 〈·, ·〉 denote the standard Euclidean norm and inner product, respectively. Scalar multiplication for v ∈ R^d and β ∈ R is denoted as β · v. f : R^d → R denotes the loss function to be minimized and ∇f(x) represents the gradient of f evaluated at x. We denote the unbiased stochastic gradient estimator of ∇f(x) as ∇f_i(x) with the random variable i independent of x (e.g., using mini-batch). We use x_0 ∈ R^d to denote the initial guess.
SGD SGD has the following simple iterative scheme, where γ ∈ R denotes the learning rate:

  x_{k+1} = x_k − γ · ∇f_{i_k}(x_k),   for k ≥ 0.
Nesterov’s momentum The original Nesterov’s accelerated gradient (with constant step) (Nesterov, 1983; 2013b) has the following scheme² (y ∈ R^d, η, β ∈ R and y_0 = x_0):

  y_{k+1} = x_k − η · ∇f(x_k),
  x_{k+1} = y_{k+1} + β · (y_{k+1} − y_k),   for k ≥ 0,     (1)
where we call β · (y_{k+1} − y_k) the momentum. By simply replacing ∇f(x_k) with ∇f_{i_k}(x_k), we obtain the SGD with Nesterov’s momentum, which is widely used in deep learning. To make this point clear, recall that the reformulation in Sutskever et al. (2013) (scheme (2), also the Tensorflow (Abadi et al., 2016) version) and the PyTorch (Paszke et al., 2017) version (scheme (3)) have the following schemes (v, v^pt ∈ R^d and v_0 = v^pt_0 = 0): for k ≥ 0,

  (2)  v_{k+1} = β · v_k − η · ∇f_{i_k}(y_k + β · v_k),
       y_{k+1} = y_k + v_{k+1}.

  (3)  v^pt_{k+1} = β · v^pt_k + ∇f_{i_k}(x_k),
       x_{k+1} = x_k − η · (β · v^pt_{k+1} + ∇f_{i_k}(x_k)).
Here the notations are modified based on their equivalence to scheme (1). It can be verified that schemes (2) and (3) are equivalent to (1) through v_k = β^{-1} · (x_k − y_k) and v^pt_k = η^{-1} β^{-1} · (y_k − x_k), respectively (see Defazio (2018) for other equivalent forms of scheme (1)).
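The equivalence can be checked numerically. Below is a minimal sketch that runs schemes (1) and (3) side by side on a toy quadratic and verifies that the x_k iterates match; the objective, step sizes, and dimensions are illustrative assumptions (the deterministic gradient stands in for the stochastic one).

import numpy as np

def grad(x):                       # toy objective f(x) = 0.5 * ||x||^2
    return x

eta, beta, d, steps = 0.1, 0.9, 5, 50
x0 = np.random.default_rng(0).normal(size=d)

# Scheme (1): iterate (x_k, y_k).
x, y = x0.copy(), x0.copy()
for _ in range(steps):
    y_next = x - eta * grad(x)
    x = y_next + beta * (y_next - y)
    y = y_next

# Scheme (3): PyTorch-style buffer v^pt, tracking x_k.
xp, v = x0.copy(), np.zeros(d)
for _ in range(steps):
    v = beta * v + grad(xp)
    xp = xp - eta * (beta * v + grad(xp))

print(np.allclose(x, xp))          # True: the two schemes produce the same x_k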
Interestingly, both PyTorch and Tensorflow³ track the values {x_k}, which we refer to as M-SGD. This choice allows a consistent implementation when wrapped in a generic optimization layer (Defazio, 2018). However, the accelerated convergence rate (in the convex case) is built upon {y_k} (Nesterov, 2013b) and {x_k} may not possess such a theoretical improvement. We use OM-SGD to refer to the Original M-SGD that outputs {y_k}.
² We exchange the notations of x and y in Nesterov (2013b).
³ Tensorflow tracks the values {y_k + β · v_k} = {x_k}.
SGD and M-SGD In order to study the features of momentum, in this work we regard momentum as an add-on to plain SGD, which corresponds to fixing the learning rates⁴ γ = η. From the interpretation in Allen-Zhu & Orecchia (2017), η represents the learning rate for the gradient descent “inside” Nesterov’s method. To introduce the evaluation metrics of this paper, we report the results of training ResNet34 (He et al., 2016b) on CIFAR-10 (Krizhevsky et al., 2009) (our basic case study) using SGD and M-SGD in Figure 1. In this paper, all multiple runs start with the same initial guess x_0. Figure 1a shows that Nesterov’s momentum hurts the convergence in the first 60 epochs but accelerates the final convergence, which verifies the importance of momentum for achieving high accuracy. Figure 1b depicts the robustness of M-SGD and SGD, which suggests that adding Nesterov’s momentum slightly increases the uncertainty in the training process of SGD.
Train-batch loss vs. Full-batch loss In Figure 1c, train-batch loss stands for the average of batch losses forwarded in an epoch, which is commonly used to indicate the training process in deep learning. Full-batch loss is the average loss over the entire training dataset evaluated at the end of each epoch. In terms of optimizer evaluation, full-batch loss is much more informative than trainbatch loss as it reveals the robustness of an optimizer. However, full-batch loss is too expensive to evaluate and thus we only measure it on small datasets. On the other hand, test accuracy couples optimization and generalization, but since it is also evaluated at the end of the epoch, its convergence is similar to full-batch loss. Considering the basic usage of momentum in deep learning, we mainly use test accuracy to evaluate optimizers. We provide more discussion on this issue in Appendix C.2.
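As a sketch of the distinction, the following shows how one might log both quantities in a PyTorch training loop; the function and variable names are illustrative assumptions.

import torch

def train_epoch(model, loss_fn, opt, train_loader):
    """Returns train-batch loss: the average of batch losses forwarded in the epoch."""
    model.train()
    total, n = 0.0, 0
    for xb, yb in train_loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
        total, n = total + loss.item(), n + 1
    return total / n   # each batch is evaluated at a different set of weights

@torch.no_grad()
def full_batch_loss(model, loss_fn, train_loader):
    """Average loss over the entire training set at the current end-of-epoch weights."""
    model.eval()
    total, n = 0.0, 0
    for xb, yb in train_loader:
        total += loss_fn(model(xb), yb).item() * xb.size(0)
        n += xb.size(0)
    return total / n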
M-SGD vs. OM-SGD We also include OM-SGD in Figure 1a. In comparison, the final accuracies of M-SGD and OM-SGD are 94.606% ± 0.152% and 94.728% ± 0.111%, with average deviations of 1.040% and 0.634%, respectively. This difference can be explained following the interpretation in Hinton (2012) that {x_k} are the points after “jump” and {y_k} are the points after “correction”.
3 AMORTIZED NESTEROV’S MOMENTUM
In this section, we formally introduce SGD with Amortized Nesterov’s Momentum (AM1-SGD) in Algorithm 1 with the following remarks:
Options It can be verified that if m = 1, AM1-SGD with Option I degenerates to M-SGD and Option II corresponds to OM-SGD. Just like the case for M-SGD and OM-SGD, the accelerated convergence rate is built upon Option II, while Option I is easier to implement in a generic optimization layer⁵. Intuitively, Option I is SGD with amortized momentum and Option II applies an m-iterations tail averaging on Option I.
⁴ Ma & Yarats (2019) observed that when effective learning rates γ = η(1 − β)^{-1} are fixed, M-SGD and SGD have similar performance. We provide a discussion on this observation in Appendix C.1.
⁵ To implement Option II, we can either maintain another identical network for the shifted point x̃ or temporarily change the network parameters in the evaluation phase.
Algorithm 1 AM1-SGD
Input: Initial guess x_0, learning rate η, momentum β, amortization length m, iteration number K.
Initialize: x ← x_0, x̃ ← x_0, x̃+ ← 0 {a running average}.
1: for k = 0, . . . , K − 1 do
2:   x ← x − η · ∇f_{i_k}(x).
3:   x̃+ ← x̃+ + (1/m) · x.
4:   if (k + 1) mod m = 0 then
5:     x ← x + β · (x̃+ − x̃). {adding amortized momentum}
6:     x̃ ← x̃+, x̃+ ← 0.
7:   end if
8: end for
Output: Option I: x, Option II: x̃.
* The symbol ‘←’ denotes assignment.
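For concreteness, here is a minimal NumPy sketch of Algorithm 1; the toy objective and the stochastic gradient oracle are illustrative assumptions.

import numpy as np

def am1_sgd(stoch_grad, x0, eta, beta, m, K, option="II"):
    """SGD with Amortized Nesterov's Momentum (Algorithm 1).
    stoch_grad(x) returns an unbiased stochastic gradient at x."""
    x = x0.copy()
    x_tilde = x0.copy()          # snapshot used to form the amortized momentum
    x_plus = np.zeros_like(x0)   # running average of the last m iterates
    for k in range(K):
        x = x - eta * stoch_grad(x)
        x_plus = x_plus + x / m
        if (k + 1) % m == 0:
            x = x + beta * (x_plus - x_tilde)   # amortized momentum step
            x_tilde, x_plus = x_plus, np.zeros_like(x0)
    return x if option == "I" else x_tilde

# Toy usage: f(x) = 0.5 * ||x||^2 with Gaussian gradient noise.
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.normal(size=x.shape)
x_out = am1_sgd(noisy_grad, x0=np.ones(10), eta=0.1, beta=0.9, m=5, K=500)
print(np.linalg.norm(x_out))     # should settle near the noise floor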
Efficiency We can improve the efficiency of Algorithm 1 by maintaining a running scaled momentum ṽ+ ≜ m · (x̃+ − x̃) instead of the running average x̃+, by replacing the following steps in Algorithm 1:
Initialize: x ← x_0, x̃ ← x_0, ṽ+ ← −m · x_0.
Step 3: ṽ+ ← ṽ+ + x.
Step 5: x ← x + (β/m) · ṽ+.
Step 6: x̃ ← x̃ + (1/m) · ṽ+, ṽ+ ← −m · x̃.
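In code, the reformulation amounts to the following sketch, a drop-in variant of the am1_sgd function above (same hypothetical interface and arrays):

def am1_sgd_fast(stoch_grad, x0, eta, beta, m, K):
    """AM1-SGD with the running scaled momentum v_plus = m * (x_plus - x_tilde)."""
    x = x0.copy()
    x_tilde = x0.copy()
    v_plus = -m * x0           # equals m * (x_plus - x_tilde) with x_plus = 0
    for k in range(K):
        x = x - eta * stoch_grad(x)
        v_plus = v_plus + x                      # replaces the averaging step
        if (k + 1) % m == 0:
            x = x + (beta / m) * v_plus          # amortized momentum step
            x_tilde = x_tilde + v_plus / m       # new snapshot = running average
            v_plus = -m * x_tilde                # reset the scaled momentum
    return x_tilde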
Then, in one m-iterations loop, for each of the first m − 1 iterations, AM1-SGD requires 1 vector addition and 1 scaled vector addition. At the m-th iteration, it requires 1 vector addition, 1 scalar-vector multiplication and 3 scaled vector additions. In comparison, M-SGD (standard PyTorch) requires 1 vector addition, 1 (in-place) scalar-vector multiplication and 2 scaled vector additions per iteration. Thus, as long as m > 2, AM1-SGD has a lower amortized cost than M-SGD. For memory complexity, AM1-SGD requires one more auxiliary buffer than M-SGD.
Tuning m We did a parameter sweep for m in our basic case study. We plot the final accuracy and the average deviation of test accuracies over 5 runs against m in Figure 2a. Note that m = 1 corresponds to the results of M-SGD and OM-SGD, which are already given in Figure 1. From this empirical result, m introduces a trade-off between final accuracy and robustness (the convergence behaviors can be found in Appendix A.1). Figure 2a suggests that m = 5 is a good choice for this task. For simplicity, and also as a recommended setting, we fix m = 5 for the rest of the experiments in this paper.
A momentum that increases robustness To provide a stronger justification, we ran 20 seeds with m = 5 in Figure 2b; the detailed data are given in Figure 3 & Table 1. The results show that the amortized momentum significantly increases the robustness. Intuitively, the gap between Option I and Option II can be understood as the effect of tail averaging. However, the large gap between Option I and SGD is somewhat mysterious: what Option I does is inject a very large momentum⁶ into SGD every m iterations. It turns out that this momentum not only provides acceleration, but also helps the algorithm become more robust than SGD. This observation basically differentiates AM1-SGD from a simple interpolation in between M-SGD and SGD.
⁶ The amortized momentum β · (x̃+ − x̃) is expected to be much larger than Nesterov’s momentum β · (y_{k+1} − y_k).
Learning rate scheduler issue We observed that when we use schedulers with a large decay factor and the momentum β is too large for the task (e.g., 0.995 for the task of this section), there would be a performance drop after the learning rate reduction. We believe that it is caused by the different cardinalities of iterates being averaged in x̃+, which leads to a false momentum. This issue is resolved by restarting the algorithm after each learning rate reduction inspired by (O’donoghue & Candes, 2015). We include more discussion and evidence in Appendix A.4.
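A minimal sketch of the restart fix follows. The training routine is an assumed placeholder, and clearing the optimizer state is our simple way of re-initializing the amortization buffers of the `AM1SGD` sketch above:

```python
# Restart the amortization whenever the scheduler reduces the learning rate.
prev_lr = optimizer.param_groups[0]['lr']
for epoch in range(num_epochs):
    train_one_epoch(model, optimizer)        # assumed training routine
    scheduler.step()
    lr = optimizer.param_groups[0]['lr']
    if lr < prev_lr:                         # the learning rate was just reduced
        optimizer.state.clear()              # drop the x_tilde / x_plus buffers
        optimizer.k = 0                      # restart the amortization counter
        prev_lr = lr
```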
3.1 AM2-SGD: A VARIANT WITH IDENTICAL ITERATIONS
Algorithm 2 AM2-SGD
Input: Initial guess x_0, amortization length m, a point table φ = [φ_1 · · · φ_m] ∈ R^{d×m}, learning rate η, momentum β, iteration number K.
Initialize: φ_j^0 = x_0, ∀j ∈ [m]*; {j_k | j_k ∈ [m]}_{k=0}^{K−1} is a sequence of uniformly random indexes; if Option II is used, φ̄_0 = x_0. {a running average for the point table φ}
1: for k = 0, . . . , K − 1 do
2:   φ_{j_k}^{k+1} = x_k − η · ∇f_{i_k}(x_k) and keep other entries unchanged (i.e., φ_j^{k+1} = φ_j^k for j ≠ j_k).
3:   x_{k+1} = φ_{j_k}^{k+1} + β · (φ_{j_{k+1}}^{k+1} − φ_{j_k}^k). {adding amortized momentum}
4:   if Option II then φ̄_{k+1} = φ̄_k + (1/m) · (φ_{j_k}^{k+1} − φ_{j_k}^k).
5: end for
Output: Option I (not recommended): x_K, Option II: φ̄_K.
* [m] denotes the set {1, . . . , m}.
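For concreteness, here is a small self-contained sketch of Algorithm 2 with Option II on a single parameter vector. The function and variable names are ours, and grad_fn stands for an assumed stochastic gradient oracle:

```python
import torch

def am2_sgd(x0, grad_fn, eta=0.1, beta=0.9, m=5, K=1000):
    """Sketch of AM2-SGD (Option II). grad_fn(x) returns a stochastic gradient."""
    phi = [x0.clone() for _ in range(m)]       # point table
    phi_bar = x0.clone()                       # running average of the table
    x = x0.clone()
    j_next = torch.randint(m, (1,)).item()     # j_0
    for k in range(K):
        j, j_next = j_next, torch.randint(m, (1,)).item()  # j_k and j_{k+1}
        old = phi[j].clone()
        phi[j] = x - eta * grad_fn(x)                      # update one table entry
        x = phi[j] + beta * (phi[j_next] - old)            # amortized momentum
        phi_bar += (phi[j] - old) / m                      # maintain the average
    return phi_bar
```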
While enjoying an improved efficiency, AM1-SGD does not have identical iterations7, which to some extent limits its extensibility to other settings (e.g., asynchronous setting). In this section, we propose a variant of Amortized Nesterov’s Momentum (AM2-SGD, Algorithm 2) to address this problem. To show the characteristics of AM2-SGD, we make the following remarks:
Trading memory for extensibility In expectation, the point table φ stores the most recent m iterations and thus the output φ̄K is an m-iterations tail average, which connects to AM1-SGD. The relation between AM1-SGD and AM2-SGD resembles that of SVRG (Johnson & Zhang, 2013) and SAGA (Defazio et al., 2014), the most popular methods in finite-sum convex optimization: to reuse the information from several past iterates, we can either maintain a “snapshot” that aggregates the information or keep the iterates in a table. A side-by-side comparison is given in Section 4.
Options and convergence As in the case of AM1-SGD, if m = 1, AM2-SGD with Option I corresponds to M-SGD and Option II is OM-SGD. In our preliminary experiments (which can be found in Appendix A), the convergence of AM2-SGD is similar to that of AM1-SGD and it also has the learning rate scheduler issue. We also observed that Option I is consistently worse than Option II and does not seem to benefit from increasing m. Thus, we do not recommend using Option I. Due to the similarity, we also set m = 5 for the evaluation of AM2-SGD.
7For AM1-SGD, the workload varies for different iteration k due to the if-clause at Step 4.
Additional randomness {jk} In our implementation, at each iteration, we sample an index in [m] as jk+1 and obtain the stored index jk. We observed that with Option I, AM2-SGD has much larger deviations than AM1-SGD, which we believe is caused by the additional random indexes {jk}.
4 CONVERGENCE RESULTS
The original Nesterov’s accelerated gradient is famous for its optimal convergence rates for solving convex problems. In this section, we analyze the convergence rates for AM1-SGD and AM2-SGD in the convex case, which explicitly model the effect of amortization (i.e., m). While these rates do not hold for deep learning problems in general, they help us understand the observed convergence behaviors of the proposed methods, especially on how they differ from M-SGD (m = 1). Moreover, the analysis also provides intuition on tuning m. Since the original Nesterov’s method is deterministic (Nesterov, 1983; 2013b), we follow the setting of its stochastic variants (Lan, 2012; Ghadimi & Lan, 2012), in which Nesterov’s acceleration also achieves the optimal rates.
We consider the following convex composite problem (Beck & Teboulle, 2009; Nesterov, 2013a):
min_{x∈X} { F(x) := f(x) + h(x) },   (4)
where X ⊆ R^d is a non-empty closed convex set and h is a proper convex function with its proximal operator prox_{αh}(·)8 available. We impose the following assumptions on the regularity of f and the stochastic oracle ∇f_i (identical to the ones in Ghadimi & Lan (2012) with μ = 0):
Assumptions. For some L ≥ 0, M ≥ 0, σ ≥ 0,
(a) 0 ≤ f(y) − f(x) − ⟨∇f(x), y − x⟩ ≤ (L/2)‖y − x‖² + M‖y − x‖, ∀x, y ∈ X.9
(b) E_i[∇f_i(x)] = ∇f(x), ∀x ∈ X.
(c) E_i[‖∇f_i(x) − ∇f(x)‖²] ≤ σ², ∀x ∈ X.
The notation E_{i_k}[·] is E[· | (i_0, . . . , i_{k−1})] for a random process i_0, i_1, . . .. These assumptions cover several important classes of convex problems. For example, (a) covers the cases of f being L-smooth (M = 0) or L_0-Lipschitz continuous (M = 2L_0, L = 0) convex functions, and if σ = 0 in (c), the assumptions cover several classes of deterministic convex programming problems. We denote x⋆ ∈ X as a solution to problem (4) and x_0 ∈ X as the initial guess. Unlike its usage in deep learning, the momentum parameter β is always a variable in general convex analysis. For the simplicity of analysis, we reformulate AM1-SGD (Algorithm 1) and AM2-SGD (Algorithm 2) into the following schemes10 (z ∈ X, α ∈ R):
AM1-SGD (reformulated, proximal)
Initialize: x̃_0 = z_0 = x_0, S = K/m.
1: for s = 0, . . . , S − 1 do
2:   for j = 0, . . . , m − 1 do
3:     k = sm + j.
4:     x_k = (1 − β_s) · z_k + β_s · x̃_s.
5:     z_{k+1} = prox_{α_s h}{ z_k − α_s · ∇f_{i_k}(x_k) }.
6:     ( x_{k+1} = (1 − β_s) · z_{k+1} + β_s · x̃_s. )
7:   end for
8:   x̃_{s+1} = (1/m) Σ_{j=1}^{m} x_{sm+j}.
9: end for
Output: x̃_S.

AM2-SGD (reformulated, proximal)
Initialize: z_0 = φ_j^0 = x_0, ∀j ∈ [m].
1: for k = 0, . . . , K − 1 do
2:   Sample j_k uniformly in [m].
3:   x_k^{j_k} = (1 − β_k) · z_k + β_k · φ_{j_k}^k.
4:   z_{k+1} = prox_{α_k h}{ z_k − α_k · ∇f_{i_k}(x_k^{j_k}) }.
5:   φ_{j_k}^{k+1} = (1 − β_k) · z_{k+1} + β_k · φ_{j_k}^k.
6: end for
Output: φ̄_K = (1/m) Σ_{j=1}^{m} φ_j^K.
We show in Appendix B.1 that when h ≡ 0 and β is a constant, the reformulated schemes AM1-SGD and AM2-SGD are equivalent to Algorithm 1 and Algorithm 2 through α_s = η(1 − β_s)^{-1} and α_k = η(1 − β_k)^{-1}, respectively.
8∀x ∈ R^d, prox_{αh}(x) := argmin_{u∈X} { (1/2)‖u − x‖² + αh(u) }; see Parikh et al. (2014).
9When M > 0, f is not necessarily differentiable and we keep using the notation ∇f(x) to denote an arbitrary subgradient of f at x for consistency.
10For simplicity, we assume K is divisible by m.
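As a concrete instance of the proximal operator defined in footnote 8, taking h = λ‖·‖_1 and X = R^d gives elementwise soft-thresholding. The sketch below is our own illustration, not part of the analysis:

```python
import torch

def prox_l1(x, alpha, lam):
    # prox_{alpha*h}(x) for h(u) = lam * ||u||_1 solves
    # argmin_u 0.5*||u - x||^2 + alpha*lam*||u||_1, i.e., soft-thresholding.
    t = alpha * lam
    return torch.sign(x) * torch.clamp(x.abs() - t, min=0.0)
```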
These reformulations are basically how Nesterov’s momentum was migrated into deep learning (Sutskever et al., 2013). Then we establish the convergence rates for AM1-SGD and AM2-SGD as follows. All the proofs in this paper are given in Appendix B.2.
Theorem 1. For the reformulated AM1-SGD, suppose we choose
β_s = s/(s + 2)  and  α_s = λ1/(L(1 − β_s)),  with  λ1 = min{ 2/3, L‖x_0 − x⋆‖/(2√m √(σ² + M²) (S + 1)^{3/2}) }.   (5)
Then,
(a) The output x̃_S satisfies

E[F(x̃_S)] − F(x⋆) ≤ 3Lm‖x_0 − x⋆‖²/(K + m)² + 8‖x_0 − x⋆‖√(σ² + M²)/√(K + m) =: K0(m).

(b) If the variance has a “light tail”, i.e., E_i[exp{‖∇f_i(x) − ∇f(x)‖²/σ²}] ≤ exp{1}, ∀x ∈ X, and X is compact, denoting D_X := max_{x∈X} ‖x − x⋆‖, for any Λ ≥ 0, we have

Prob{ F(x̃_S) − F(x⋆) ≤ K0(m) + 4Λσ(3‖x_0 − x⋆‖ + √6 D_X)/(3√(K + m)) } ≥ 1 − (exp{−Λ²/3} + exp{−Λ}).
Remarks: (a) Regarding K0(m), its minimum is obtained at either m = 1 or m = K. Note that for AM1-SGD, m is strictly constrained in {1, . . . , K}. It can be verified that when m = K, AM1-SGD becomes the modified mirror descent SA (Lan, 2012), or under the Euclidean setting, the SGD that outputs the average of the whole history, which is rarely used in practice. In this case, the convergence rate in Theorem 1a becomes the corresponding O(L/K + (σ + M)/√K) (cf. Theorem 1 in Lan (2012)). Thus, we can regard AM1-SGD as a smooth transition between AC-SA and the modified mirror descent SA. (b) The additional compactness and “light tail” assumptions are similarly required in Nemirovski et al. (2009); Lan (2012); Ghadimi & Lan (2012). Recently, Juditsky et al. (2019) established similar bounds under weaker assumptions by truncating the gradient. However, as indicated by the authors, their technique cannot be used for accelerated algorithms due to the accumulation of bias.
Understandings: Theorem 1a gives the expected performance in terms of the full-batch loss F(x̃) − F(x⋆), from which the trade-off of m is clear: increasing m improves the dependence on the variance σ but deteriorates the O(L/K²) term (i.e., the acceleration). Based on this trade-off, we can understand the empirical results in Figure 2b: the faster convergence in the early stage could be the result of a better control on σ, and the slightly lowered final accuracy is possibly caused by the reduced acceleration effect. Theorem 1b provides the probability of the full-batch loss deviating from its expected performance (i.e., K0(m)). It is clear that increasing m leads to smaller deviations with the same probability, which sheds light on the understanding of the increased robustness observed in Figure 2. Since the theorem is built on the full-batch loss, we ran an experiment based on this metric in Figure 4 & Table 2. Here we choose training a smaller ResNet18 with pre-activation (He et al., 2016a) on CIFAR-10 as the case study (the test accuracy is reported in Appendix A.5).
For AM2-SGD, we only give the expected convergence results as follows.
Theorem 2. For the reformulated AM2-SGD, if we choose

β_k = (k/m)/(k/m + 2)  and  α_k = λ2/(L(1 − β_k))  with  λ2 = min{ 2/3, L‖x_0 − x⋆‖/(√(2m)(σ + M)((K − 1)/m + 2)^{3/2}) },

the output φ̄_K satisfies

E[F(φ̄_K)] − F(x⋆) ≤ ( 4(m² − m)(F(x_0) − F(x⋆)) + 3Lm‖x_0 − x⋆‖² )/(K + 2m − 1)² + 4√2‖x_0 − x⋆‖(σ + M)/√(K + 2m − 1).
Remark: In comparison with Theorem 1a, Theorem 2 has an additional term F(x_0) − F(x⋆) in the upper bound, which is inevitable. This difference comes from different restrictions on the choice of m. For AM2-SGD, m ≥ 1 is the only requirement; since m may be much larger than K, in which case no improved rate is obtained, this additional term is inevitable. As a sanity check, we can let m → ∞ to obtain a point table with almost all entries equal to x_0, and then the upper bound becomes exactly F(x_0) − F(x⋆). In some cases, there exists an optimal choice of m > 1 in Theorem 2. However, the optimal choice could be messy and thus we omit the discussion here.
Understanding: Comparing the rates, we see that when using the same m, AM2-SGD has a slightly better dependence on σ, which is related to the observation in Figure 5 that AM2-SGD is always slightly faster than AM1-SGD. This difference suggests that randomly incorporating past iterates beyond m iterations helps. If m = O(1), Theorems 1 and 2 establish the optimal O(L/K² + (σ + M)/√K) rate in the convex setting (see Lan (2012) for optimality), which verifies AM1-SGD and AM2-SGD as variants of Nesterov’s method (Nesterov, 1983; 2013b). From the above analysis, the effect of m can be understood as trading acceleration for variance control. However, since both acceleration and variance control boost the convergence speed, the reduced final performance observed in the CIFAR experiments may not always be the case, as will be shown in Figure 5 and Table 3.
Connections with Katyusha Our original inspiration of AM1-SGD comes from the construction of Katyusha (Allen-Zhu, 2018), the recent breakthrough in finite-sum convex optimization, which uses a previously calculated “snapshot” point to provide momentum, i.e., Katyusha momentum. AM1-SGD also uses an aggregated point to provide momentum and it shares many structural similarities with Katyusha. We refer the interested readers to Appendix B.3.
5 PERFORMANCE EVALUATION
In this section, we evaluate AM1-SGD and AM2-SGD on more deep learning tasks. Our goal is to show their potential to serve as alternatives for M-SGD. Regarding the options: for AM1-SGD, Option I is a nice choice, which has slightly better final performance as shown in Table 1; for AM2-SGD, Option I is not recommended as mentioned before. Here we choose to evaluate Option II for both methods for consistency, which also corresponds to the analysis in Section 4. AM1-SGD and AM2-SGD use exactly the same values for (η, β) as M-SGD, which were tuned to optimize the performance of M-SGD. We set m = 5 for AM1-SGD and AM2-SGD.
We trained ResNet50 and ResNet152 (He et al., 2016b) on the ILSVRC2012 dataset (“ImageNet”) (Russakovsky et al., 2015) shown in Figure 5b. For this task, we used 0.1 initial learning rate and 0.9 momentum for all methods, which is a typical choice. We performed a restart after each learning rate reduction as discussed in Appendix A.4. We believe that this helps the training process and also does not incur any additional overhead. We report the final accuracy in Table 3.
We also did a language model experiment on Penn Treebank dataset (Marcus et al., 1993). We used the LSTM (Hochreiter & Schmidhuber, 1997) model defined in Merity et al. (2017) and followed the experimental setup in its released code. We only changed the learning rate and momentum in
the setup. The baseline is SGD+ASGD11 (Polyak & Juditsky, 1992) with constant learning rate 30 as used in Merity et al. (2017). For the choice of (η, β), following Lucas et al. (2019), we chose β = 0.99 and used the scheduler that reduces the learning rate by half when the validation loss has not decreased for 15 epochs. We swept η from {5, 2.5, 1, 0.1, 0.01} and found that η = 2.5 resulted in the lowest validation perplexity for M-SGD. We thus ran AM1-SGD and AM2-SGD with this (η, β) and m = 5. Due to the small decay factor, we did not restart AM1-SGD and AM2-SGD after learning rate reductions. The validation perplexity curve is plotted in Figure 5a. We report validation perplexity and test perplexity in Table 3. This experiment is directly comparable with the one in Lucas et al. (2019).
Extra results are provided in the appendices for interested readers: the robustness when using large β (Appendix A.2), a CIFAR-100 experiment (Appendix A.6) and comparison with classical momentum (Polyak, 1964), AggMo (Lucas et al., 2019) and QHM (Ma & Yarats, 2019) (Appendix A.3).
6 CONCLUSIONS
We presented Amortized Nesterov’s Momentum, which is a special variant of Nesterov’s momentum that utilizes several past iterates to provide the momentum. Based on this idea, we designed two different realizations, namely, AM1-SGD and AM2-SGD. Both of them are simple to implement with little-to-no additional tuning overhead over M-SGD. Our empirical results demonstrate that switching to AM1-SGD and AM2-SGD produces faster early convergence and comparable final generalization performance. AM1-SGD is lightweight and has more robust iterates than M-SGD, and thus can serve as a favorable alternative to M-SGD in large-scale deep learning tasks. AM2-SGD could be favorable for more restrictive tasks (e.g., asynchronous training) due to its extensibility and good performance. Both the methods are proved optimal in the convex case, just like M-SGD. Based on the intuition from convex analysis, the proposed methods are trading acceleration for variance control, which provides hints for the hyper-parameter tuning.
11SGD+ASGD is to run SGD and switch to averaged SGD (ASGD) when a threshold is met.
Appendices
A Extra Experimental Results
  A.1 The effect of m on convergence
  A.2 Robustness on large momentum parameters
  A.3 Comparison with other momentum
  A.4 Issues with learning rate schedulers
  A.5 Test accuracy results of Figure 4 & Table 2
  A.6 CIFAR-100 experiment
  A.7 A sanity check
B Missing parts in Section 4
  B.1 The reformulations
  B.2 Proofs of Theorem 1 and Theorem 2
    B.2.1 Proof of Lemma 1
    B.2.2 Proof of Theorem 1a
    B.2.3 Proof of Theorem 1b
    B.2.4 Proof of Theorem 2
  B.3 Connections between AM1-SGD and Katyusha
C Miscellanies
  C.1 Comparison of SGD and M-SGD
  C.2 Training evaluation
D Experimental Setup
  D.1 Classification Setup
  D.2 Language Model Setup
A EXTRA EXPERIMENTAL RESULTS
In this appendix, we provide more experimental results to further evaluate the Amortized Nesterov’s Momentum. Table 4 shows the detailed data of the parameter sweep experiments; the corresponding convergence curves are given in Appendix A.1. In Appendix A.2, we compare the robustness of AM1-SGD and M-SGD on large momentum parameters. In Appendix A.3, we empirically compare the Amortized Nesterov’s Momentum with classical momentum (Polyak, 1964), aggregated momentum (Lucas et al., 2019) and quasi-hyperbolic momentum (Ma & Yarats, 2019). We discuss the issues with learning rate schedulers in Appendix A.4. We report the test accuracy results of the ResNet18 experiment (in Section 4) in Appendix A.5. A CIFAR-100 experiment is provided in Appendix A.6. We also provide a sanity check for our implementation in Appendix A.7.
[Table 4: METHOD, DESCRIPTION, FINAL ACCURACY, Avg. STD; detailed data of the parameter sweep.]
A.1 THE EFFECT OF m ON CONVERGENCE
We show in Figure 6 how m affects the convergence of test accuracy. The results show that increasing m speeds up the convergence in the early stage. While for AM1-SGD the convergences of Option I and Option II are similar, AM2-SGD with Option II is consistently better than with Option I in this experiment. It seems that AM2-SGD with Option I does not benefit from increasing m and the algorithm is not robust. Thus, we do not recommend using Option I for AM2-SGD.
A.2 ROBUSTNESS ON LARGE MOMENTUM PARAMETERS
We compare the robustness of M-SGD and AM1-SGD when β is large in Figure 7 & Table 5. For fair comparison, AM1-SGD uses Option I. As we can see, the STD error of M-SGD scales up significantly when β is larger and the performance is more affected by a large β compared with AM1-SGD.
A.3 COMPARISON WITH OTHER MOMENTUM
In this section, we compare AM1-SGD (Option I) with classical momentum (Polyak, 1964), AggMo (Lucas et al., 2019) and QHM (Ma & Yarats, 2019) in our basic case study (training ResNet34 on
CIFAR-10). Since we are not aware of what makes a fair comparison with these methods (e.g., it is not clear what is the effective learning rate for AM1-SGD), we compare them based on the default hyper-parameter settings suggested by their papers.
Classical Momentum The SGD with classical momentum (CM-SGD) that is widely used in deep learning has the following scheme (standard PyTorch) (v^{cm} ∈ R^d, v^{cm}_0 = 0):

v^{cm}_{k+1} = β · v^{cm}_k + ∇f_{i_k}(x_k),
x_{k+1} = x_k − η · v^{cm}_{k+1},  for k ≥ 0.
CM-SGD with its typical hyper-parameter settings (η0 = 0.1, β = 0.9) is observed to achieve similar generalization performance as M-SGD. However, CM-SGD is more unstable and prone to oscillations (Lucas et al., 2019), which makes it less robust than M-SGD as shown in Table 6.
Aggregated Momentum (AggMo) AggMo combines multiple momentum buffers, which is inspired by the passive damping from the physics literature (Lucas et al., 2019). AggMo uses the following update rules (for t = 1, . . . , T, v^{(t)} ∈ R^d, v^{(t)}_0 = 0):

v^{(t)}_{k+1} = β^{(t)} · v^{(t)}_k − ∇f_{i_k}(x_k),  for t = 1, . . . , T,
x_{k+1} = x_k + (η/T) · Σ_{t=1}^{T} v^{(t)}_{k+1},  for k ≥ 0.
We used the exponential hyper-parameter setting recommended in the original work with the scale-factor a = 0.1 fixed, β^{(t)} = 1 − a^{t−1}, for t = 1, . . . , T, and choosing T in {2, 3, 4}. We found that T = 2 gave the best performance in this experiment. As shown in Figure 8 & Table 6, with the help of passive damping, AggMo is more stable and robust compared with CM-SGD.
Quasi-hyperbolic Momentum (QHM) Ma & Yarats (2019) introduce the immediate discount factor ν ∈ R for the momentum scheme, which results in the QHM update rules (α ∈ R, v^{qh} ∈ R^d, v^{qh}_0 = 0):

v^{qh}_{k+1} = β · v^{qh}_k + (1 − β) · ∇f_{i_k}(x_k),
x_{k+1} = x_k − α · (ν · v^{qh}_{k+1} + (1 − ν) · ∇f_{i_k}(x_k)),  for k ≥ 0.

Here we used the recommended hyper-parameter setting for QHM (α_0 = 1.0, β = 0.999, ν = 0.7).
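To make the three baselines directly comparable, the following self-contained sketch (our own code, not from the cited papers) implements one step of each update rule on a parameter vector:

```python
import torch

def cm_step(x, v, grad, eta, beta):
    """Classical momentum (standard PyTorch convention)."""
    v = beta * v + grad
    return x - eta * v, v

def aggmo_step(x, vs, grad, eta, betas):
    """AggMo: average of T momentum buffers with damping factors betas."""
    vs = [b * v - grad for b, v in zip(betas, vs)]
    return x + (eta / len(vs)) * sum(vs), vs

def qhm_step(x, v, grad, alpha, beta, nu):
    """QHM: interpolate between the momentum buffer and the raw gradient."""
    v = beta * v + (1 - beta) * grad
    return x - alpha * (nu * v + (1 - nu) * grad), v
```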
Figure 8 shows that AM1-SGD, AggMo and QHM achieve faster convergence in the early stage while CM-SGD has the highest final accuracy. In terms of robustness, huge gaps are observed when comparing AM1-SGD with the remaining methods in Table 6. Note that AM1-SGD is more efficient than both QHM and AggMo, and is as efficient as CM-SGD.
We also plot the convergence of train-batch loss for all the methods in Figure 9. Despite showing worse generalization performance, both QHM and AggMo perform better on reducing the train-batch loss in this experiment, which is consistent with the results reported in Ma & Yarats (2019); Lucas et al. (2019).
A.4 ISSUES WITH LEARNING RATE SCHEDULERS
We show in Figure 10 that when β is large for the task and a step learning rate scheduler with decay factor 10 is used, a performance drop is observed after each reduction. Both Option I and Option II have this issue and the curves are basically identical; here we only use Option II. We fix this issue by performing a restart after each learning rate reduction (labeled with ‘+’). We plot the train-batch loss here because we find the phenomenon is clearer in this way. If β = 0.9, there is no observable performance drop in this experiment.
For smooth-changing schedulers such as the cosine annealing scheduler (Loshchilov & Hutter, 2016), the amortized momentum works well as shown in Figure 11.
A.5 TEST ACCURACY RESULTS OF FIGURE 4 & TABLE 2
We report the test accuracy results of the experiments in Section 4 in Figure 12 & Table 7. These results are reminiscent of the ResNet34 experiments (Figure 3 & Table 1).
A.6 CIFAR-100 EXPERIMENT
We report the results of training DenseNet121 (Huang et al., 2017) on CIFAR-100 in Figure 13, which shows that both AM1-SGD and AM2-SGD perform well before the final learning rate reduction. However, the final accuracies are lowered around 0.6% compared with M-SGD. We also notice that SGD reduces the train-batch loss at an incredibly fast rate and the losses it reaches are consistently lower than other methods in the entire 300 epochs. However, this performance is not
reflected in the convergence of test accuracy. We believe that this phenomenon suggests that the DenseNet model is actually “overfitting” M-SGD (since in the ResNet experiments, M-SGD always achieves a lower train loss than SGD after the final learning rate reduction).
A.7 A SANITY CHECK
When m = 1, both AM1-SGD and AM2-SGD are equivalent to M-SGD, we plot their convergence in Figure 14 as a sanity check (the detailed data is given in Table 4).
We observed that when m = 1, both AM1-SGD and AM2-SGD have a lower STD error than M-SGD. We believe that this is because they both maintain the iterates without scaling, which is numerically more stable than M-SGD (M-SGD in standard PyTorch maintains a scaled buffer, i.e., v^{pt}_k = η^{-1}β^{-1} · (y_k − x_k)).
B MISSING PARTS IN SECTION 4
B.1 THE REFORMULATIONS
When h ≡ 0 and β is a constant, we do the reformulations by eliminating the sequence {z_k}. For the reformulated AM2-SGD, the iterations read

x_k^{j_k} = (1 − β) · z_k + β · φ_{j_k}^k,
z_{k+1} = z_k − α · ∇f_{i_k}(x_k^{j_k}),
φ_{j_k}^{k+1} = (1 − β) · z_{k+1} + β · φ_{j_k}^k,
( x_{k+1}^{j_{k+1}} = (1 − β) · z_{k+1} + β · φ_{j_{k+1}}^{k+1} ).

Setting α(1 − β) = η and eliminating {z_k} yields

φ_{j_k}^{k+1} = x_k^{j_k} − η · ∇f_{i_k}(x_k^{j_k}),
x_{k+1}^{j_{k+1}} = φ_{j_k}^{k+1} + β · ( φ_{j_{k+1}}^{k+1} − φ_{j_k}^k ),

which is exactly Algorithm 2.
For the reformulated AM1-SGD, when h ≡ 0, the inner loops are basically SGD:

x_k = (1 − β) · z_k + β · x̃_s,
z_{k+1} = z_k − α · ∇f_{i_k}(x_k),
( x_{k+1} = (1 − β) · z_{k+1} + β · x̃_s. )

Setting α(1 − β) = η and eliminating {z_k} yields x_{k+1} = x_k − η · ∇f_{i_k}(x_k).

At the end of each inner loop (i.e., when (k + 1) mod m = 0), we have

x_{(s+1)m} = (1 − β) · z_{(s+1)m} + β · x̃_s,

while at the beginning of the next inner loop,

x_{(s+1)m} = (1 − β) · z_{(s+1)m} + β · x̃_{s+1},

which means that we need to set x_{k+1} ← x_{k+1} + β · (x̃_{s+1} − x̃_s) (reassign the value of x_{k+1}). We also give the reformulation of M-SGD (scheme (1)) to the Auslender & Teboulle (2006) scheme for reference:
The Auslender & Teboulle (2006) scheme (AC-SA (Lan, 2012)) is

x_k = (1 − β) · z_k + β · y_k,
z_{k+1} = z_k − α · ∇f_{i_k}(x_k),
y_{k+1} = (1 − β) · z_{k+1} + β · y_k,
( x_{k+1} = (1 − β) · z_{k+1} + β · y_{k+1} ).

Setting α(1 − β) = η and eliminating {z_k} yields the scheme of Nesterov (1983; 2013b):

y_{k+1} = x_k − η · ∇f_{i_k}(x_k),
x_{k+1} = y_{k+1} + β · (y_{k+1} − y_k).

AC-SA (in the Euclidean case) maps to the Auslender & Teboulle (2006) scheme through (in the original notations) x = x^{md}, z = x, y = x^{ag}, 1 − β = β_t^{-1}, α = γ_t.
Intuition for the Auslender & Teboulle (2006) scheme can be found in Remark 2 in Lan (2012).
B.2 PROOFS OF THEOREM 1 AND THEOREM 2
The reformulated schemes are copied here for reference:
AM1-SGD (reformulated, proximal)
Initialize: x̃_0 = z_0 = x_0, S = K/m.
1: for s = 0, . . . , S − 1 do
2:   for j = 0, . . . , m − 1 do
3:     k = sm + j.
4:     x_k = (1 − β_s) · z_k + β_s · x̃_s.
5:     z_{k+1} = prox_{α_s h}{ z_k − α_s · ∇f_{i_k}(x_k) }.
6:     ( x_{k+1} = (1 − β_s) · z_{k+1} + β_s · x̃_s. )
7:   end for
8:   x̃_{s+1} = (1/m) Σ_{j=1}^{m} x_{sm+j}.
9: end for
Output: x̃_S.

AM2-SGD (reformulated, proximal)
Initialize: z_0 = φ_j^0 = x_0, ∀j ∈ [m].
1: for k = 0, . . . , K − 1 do
2:   Sample j_k uniformly in [m].
3:   x_k^{j_k} = (1 − β_k) · z_k + β_k · φ_{j_k}^k.
4:   z_{k+1} = prox_{α_k h}{ z_k − α_k · ∇f_{i_k}(x_k^{j_k}) }.
5:   φ_{j_k}^{k+1} = (1 − β_k) · z_{k+1} + β_k · φ_{j_k}^k.
6: end for
Output: φ̄_K = (1/m) Σ_{j=1}^{m} φ_j^K.
Comparing the reformulated schemes, we see that their iterations can be generalized as follows:

x = (1 − β) · z + β · y,
z⁺ = prox_{αh}{ z − α · ∇f_i(x) },
y⁺ = (1 − β) · z⁺ + β · y.   (6)
This type of scheme is first proposed in Auslender & Teboulle (2006), which represents one of the simplest variants of the Nesterov’s methods (see Tseng (2008) for other variants). The scheme is then modified into various settings (Hu et al., 2009; Lan, 2012; Ghadimi & Lan, 2012; 2016; Zhou et al., 2019; Lan et al., 2019) to achieve acceleration. The following lemma serves as a cornerstone for the convergence proofs of AM1-SGD and AM2-SGD.
Lemma 1. If α(1 − β) < 1/L, the update scheme (6) satisfies the following recursion:

(1/(1 − β)) (F(y⁺) − F(x⋆)) ≤ (β/(1 − β)) (F(y) − F(x⋆)) + (1/(2α)) (‖z − x⋆‖² − ‖z⁺ − x⋆‖²) + (‖∇f(x) − ∇f_i(x)‖ + M)² / (2(α⁻¹ − L(1 − β))) + ⟨∇f(x) − ∇f_i(x), z − x⋆⟩.
B.2.1 PROOF OF LEMMA 1
This Lemma is similarly provided in Lan (2012); Ghadimi & Lan (2012) under a more general setting that allows non-Euclidean norms in the assumptions, we give a proof here for completeness.
Based on the convexity (Assumption (a)), we have

f(x) − f(x⋆) ≤ ⟨∇f(x), x − z⟩ + ⟨∇f(x) − ∇f_i(x), z − x⋆⟩ + ⟨∇f_i(x), z − z⁺⟩ + ⟨∇f_i(x), z⁺ − x⋆⟩ =: R0 + R1 + R2 + R3.   (7)

We upper bound the terms on the right side one-by-one.

For R0,

R0 = (β/(1 − β)) ⟨∇f(x), y − x⟩ ≤ (β/(1 − β)) (f(y) − f(x)),   (8)

where the equality uses the relation between x and z, i.e., (1 − β) · (x − z) = β · (y − x).

For R2, based on Assumption (a), we have

f(y⁺) − f(x) + ⟨∇f(x), x − y⁺⟩ ≤ (L/2)‖x − y⁺‖² + M‖x − y⁺‖.

Then, noting that x − y⁺ = (1 − β) · (z − z⁺), we can arrange the above inequality as

R2 ≤ (L(1 − β)/2)‖z − z⁺‖² + (1/(1 − β))(f(x) − f(y⁺)) + ⟨∇f(x) − ∇f_i(x), z⁺ − z⟩ + M‖z − z⁺‖
   ≤ (L(1 − β)/2)‖z − z⁺‖² + (1/(1 − β))(f(x) − f(y⁺)) + (‖∇f(x) − ∇f_i(x)‖ + M)‖z − z⁺‖.

Using Young’s inequality with ζ > 0, we obtain

R2 ≤ ((L(1 − β) + ζ)/2)‖z − z⁺‖² + (1/(1 − β))(f(x) − f(y⁺)) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2ζ).   (9)

For R3, based on the optimality condition of prox_{αh}{z − α · ∇f_i(x)} and denoting ∂h(z⁺) as a subgradient of h at z⁺, we have for any u ∈ X,

⟨α · ∂h(z⁺) + z⁺ − z + α · ∇f_i(x), u − z⁺⟩ ≥ 0,
⟨∇f_i(x), z⁺ − u⟩ ≤ ⟨∂h(z⁺), u − z⁺⟩ + (1/α)⟨z⁺ − z, u − z⁺⟩ ≤ h(u) − h(z⁺) + (1/α)⟨z⁺ − z, u − z⁺⟩.

Choosing u = x⋆,

R3 ≤ h(x⋆) − h(z⁺) + (1/α)⟨z⁺ − z, x⋆ − z⁺⟩ = h(x⋆) − h(z⁺) + (1/(2α))(‖z − x⋆‖² − ‖z⁺ − x⋆‖² − ‖z⁺ − z‖²),   (10)

where the equality follows from ‖a + b‖² = ‖a‖² + ‖b‖² + 2⟨a, b⟩. Finally, by upper bounding (7) using (8), (9), (10), we conclude that

f(x) − f(x⋆) ≤ R1 + (β/(1 − β))(f(y) − f(x)) + ((L(1 − β) + ζ − α⁻¹)/2)‖z − z⁺‖² + (1/(1 − β))(f(x) − f(y⁺)) + h(x⋆) − h(z⁺) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2ζ) + (1/(2α))(‖z − x⋆‖² − ‖z⁺ − x⋆‖²).

After simplification,

(1/(1 − β))(f(y⁺) − f(x⋆)) ≤ (β/(1 − β))(f(y) − f(x⋆)) + ((L(1 − β) + ζ − α⁻¹)/2)‖z − z⁺‖² + h(x⋆) − h(z⁺) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2ζ) + R1 + (1/(2α))(‖z − x⋆‖² − ‖z⁺ − x⋆‖²).   (11)

Note that with the convexity of h and y⁺ = (1 − β) · z⁺ + β · y, we have

h(y⁺) ≤ (1 − β)h(z⁺) + βh(y),  i.e.,  h(z⁺) ≥ (1/(1 − β))h(y⁺) − (β/(1 − β))h(y).

Using the above inequality and choosing ζ = α⁻¹ − L(1 − β) > 0 (equivalently, α(1 − β) < 1/L), we can arrange (11) as

(1/(1 − β))(F(y⁺) − F(x⋆)) ≤ (β/(1 − β))(F(y) − F(x⋆)) + (1/(2α))(‖z − x⋆‖² − ‖z⁺ − x⋆‖²) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2(α⁻¹ − L(1 − β))) + R1.
B.2.2 PROOF OF THEOREM 1A
Using Assumption (c), Lemma 1 with

x = x_k, z = z_k, z⁺ = z_{k+1}, y = x̃_s, y⁺ = x_{k+1}, α = α_s, β = β_s,   (12)

and taking expectation, if α_s(1 − β_s) < 1/L, we have

(1/(1 − β_s))(E_{i_k}[F(x_{k+1})] − F(x⋆)) + (1/(2α_s)) E_{i_k}[‖z_{k+1} − x⋆‖²]
 ≤ (β_s/(1 − β_s))(F(x̃_s) − F(x⋆)) + (1/(2α_s))‖z_k − x⋆‖² + (σ + M)²/(2(α_s⁻¹ − L(1 − β_s))).

Summing the above inequality from k = sm, . . . , sm + m − 1, we obtain

(1/((1 − β_s)m)) Σ_{j=1}^{m} (E[F(x_{sm+j})] − F(x⋆)) + (1/(2α_s m)) E[‖z_{(s+1)m} − x⋆‖²]
 ≤ (β_s/(1 − β_s))(F(x̃_s) − F(x⋆)) + (1/(2α_s m))‖z_{sm} − x⋆‖² + (σ + M)²/(2(α_s⁻¹ − L(1 − β_s))).

Using the definition of x̃_{s+1} and convexity,

(α_s/(1 − β_s))(E[F(x̃_{s+1})] − F(x⋆)) + (1/(2m)) E[‖z_{(s+1)m} − x⋆‖²]
 ≤ (α_s β_s/(1 − β_s))(F(x̃_s) − F(x⋆)) + (1/(2m))‖z_{sm} − x⋆‖² + α_s(σ² + M²)/(α_s⁻¹ − L(1 − β_s)).   (13)

It can be verified that with the choices β_s = s/(s + 2) and α_s = λ1/(L(1 − β_s)), the following holds for s ≥ 0:

α_{s+1}β_{s+1}/(1 − β_{s+1}) ≤ α_s/(1 − β_s)  and  β_0 = 0.   (14)
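For completeness, the verification of (14), which the text leaves implicit, is a one-line computation (our addition):

```latex
% With \beta_s = s/(s+2), we have 1-\beta_s = 2/(s+2) and
% \alpha_s = \lambda_1 (s+2)/(2L). Hence
\frac{\alpha_{s+1}\beta_{s+1}}{1-\beta_{s+1}}
  = \frac{\lambda_1 (s+3)(s+1)}{4L}
  \le \frac{\lambda_1 (s+2)^2}{4L}
  = \frac{\alpha_s}{1-\beta_s},
% since (s+1)(s+3) = (s+2)^2 - 1, and \beta_0 = 0/(0+2) = 0.
```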
Note that since our analysis aims at providing intuition, we do not refine the choice of α_s as in (Hu et al., 2009; Ghadimi & Lan, 2012). Thus, by telescoping (13) from s = S − 1, . . . , 0, we obtain

(α_{S−1}/(1 − β_{S−1}))(E[F(x̃_S)] − F(x⋆)) + (1/(2m)) E[‖z_{Sm} − x⋆‖²] ≤ (1/(2m))‖x_0 − x⋆‖² + Σ_{s=0}^{S−1} α_s(σ² + M²)/(α_s⁻¹ − L(1 − β_s)),

and thus,

E[F(x̃_S)] − F(x⋆) ≤ (2L/(λ1 m(S + 1)²))‖x_0 − x⋆‖² + (4L(σ² + M²)/(λ1(S + 1)²)) Σ_{s=0}^{S−1} α_s²/(1 − α_s(1 − β_s)L)
 (a)≤ (2L/(λ1 m(S + 1)²))‖x_0 − x⋆‖² + (3λ1(σ² + M²)/(L(S + 1)²)) Σ_{s=0}^{S−1} (s + 2)²
 (b)≤ (2L/(λ1 m(S + 1)²))‖x_0 − x⋆‖² + 8λ1(σ² + M²)(S + 1)/L,

where (a) follows from λ1 ≤ 2/3 and (b) holds because x ↦ (x + 2)² is non-decreasing on x ≥ 0 and thus

Σ_{s=0}^{S−1} (s + 2)² ≤ ∫_0^S (x + 2)² dx ≤ (S + 2)³/3 ≤ 8(S + 1)³/3.

Denoting λ1⋆ := L‖x_0 − x⋆‖/(2√m √(σ² + M²) (S + 1)^{3/2}), and based on the choice λ1 = min{2/3, λ1⋆}: if λ1⋆ ≤ 2/3, we have

E[F(x̃_S)] − F(x⋆) ≤ 8‖x_0 − x⋆‖√(σ² + M²)/(m^{1/2}(S + 1)^{1/2});

if λ1⋆ > 2/3,

E[F(x̃_S)] − F(x⋆) ≤ 3L‖x_0 − x⋆‖²/(m(S + 1)²) + 4‖x_0 − x⋆‖√(σ² + M²)/(m^{1/2}(S + 1)^{1/2}).

Thus, we conclude that

E[F(x̃_S)] − F(x⋆) ≤ 3L‖x_0 − x⋆‖²/(m(S + 1)²) + 8‖x_0 − x⋆‖√(σ² + M²)/(m^{1/2}(S + 1)^{1/2}).

Substituting S = K/m completes the proof.
B.2.3 PROOF OF THEOREM 1B
In order to prove Theorem 1b, we need the following known result for the martingale difference (cf. Lemma 2 in Lan et al. (2012)):

Lemma 2. With N > 0, let ξ_0, ξ_1, . . . , ξ_{N−1} be a sequence of i.i.d. random variables, for t = 0, . . . , N − 1, let σ_t > 0 be a deterministic number and ψ_t = ψ_t(ξ_0, . . . , ξ_t) be a deterministic measurable function such that E_{ξ_t}[ψ_t] = 0 a.s. and E_{ξ_t}[exp{ψ_t²/σ_t²}] ≤ exp{1} a.s.. Then for any Λ ≥ 0,

Prob{ Σ_{t=0}^{N−1} ψ_t ≥ Λ √(Σ_{t=0}^{N−1} σ_t²) } ≤ exp{−Λ²/3}.

To start with, using Lemma 1 with the parameter mapping (12), we have

(1/(1 − β_s))(F(x_{k+1}) − F(x⋆)) + (1/(2α_s))‖z_{k+1} − x⋆‖²
 ≤ (β_s/(1 − β_s))(F(x̃_s) − F(x⋆)) + (1/(2α_s))‖z_k − x⋆‖² + (‖∇f(x_k) − ∇f_{i_k}(x_k)‖ + M)²/(2(α_s⁻¹ − L(1 − β_s))) + ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩
 ≤ (β_s/(1 − β_s))(F(x̃_s) − F(x⋆)) + (1/(2α_s))‖z_k − x⋆‖² + M²/(α_s⁻¹ − L(1 − β_s)) + ‖∇f(x_k) − ∇f_{i_k}(x_k)‖²/(α_s⁻¹ − L(1 − β_s)) + ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩.

Summing the above inequality from k = sm, . . . , sm + m − 1 and using the choice α_s = λ1/(L(1 − β_s)) with λ1 ≤ 2/3, we obtain

(α_s/(1 − β_s))(F(x̃_{s+1}) − F(x⋆)) + (1/(2m))‖z_{(s+1)m} − x⋆‖²
 ≤ (α_s β_s/(1 − β_s))(F(x̃_s) − F(x⋆)) + (1/(2m))‖z_{sm} − x⋆‖² + 3α_s²M² + (3α_s²/m) Σ_{k=sm}^{sm+m−1} ‖∇f(x_k) − ∇f_{i_k}(x_k)‖² + (α_s/m) Σ_{k=sm}^{sm+m−1} ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩.

With our parameter choices, the relations in (14) hold and thus we can telescope the above inequality from s = S − 1, . . . , 0:

(α_{S−1}/(1 − β_{S−1}))(F(x̃_S) − F(x⋆)) ≤ (1/(2m))‖x_0 − x⋆‖² + 3M² Σ_{s=0}^{S−1} α_s² + (3/m)·R4 + (1/m)·R5,   (15)

where R4 := Σ_{k=0}^{K−1} α_{⌊k/m⌋}² ‖∇f(x_k) − ∇f_{i_k}(x_k)‖² and R5 := Σ_{k=0}^{K−1} α_{⌊k/m⌋} ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩.

Denoting V_k² := ‖∇f(x_k) − ∇f_{i_k}(x_k)‖² and ᾱ := Σ_{k=0}^{K−1} α_{⌊k/m⌋}² = m Σ_{s=0}^{S−1} α_s², for R4, by Jensen’s inequality, we have

E[ exp{ (1/ᾱ) Σ_{k=0}^{K−1} α_{⌊k/m⌋}² V_k²/σ² } ] ≤ (1/ᾱ) Σ_{k=0}^{K−1} α_{⌊k/m⌋}² E[exp{V_k²/σ²}] ≤ exp{1},

where the last inequality uses the additional assumption E_{i_k}[exp{V_k²/σ²}] ≤ exp{1}.

Then, based on Markov’s inequality, we have for any Λ ≥ 0,

Prob{ exp{ (1/ᾱ) Σ_{k=0}^{K−1} α_{⌊k/m⌋}² V_k²/σ² } ≥ exp{Λ + 1} } ≤ exp{−Λ},  i.e.,  Prob{ R4 ≥ (Λ + 1)σ²m Σ_{s=0}^{S−1} α_s² } ≤ exp{−Λ}.   (16)

For R5, since E_{i_k}[ α_{⌊k/m⌋} ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩ ] = 0 and

E_{i_k}[ exp{ α_{⌊k/m⌋}² ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩² / (α_{⌊k/m⌋}² σ² D_X²) } ] ≤ E_{i_k}[exp{V_k²/σ²}] ≤ exp{1},

which is based on the “light tail” assumption, using Lemma 2, we obtain

Prob{ R5 ≥ Λσ D_X √(m Σ_{s=0}^{S−1} α_s²) } ≤ exp{−Λ²/3}.   (17)

Combining (15), (16) and (17), based on the parameter setting (cf. (5)) and using the notation

K0(m) := 3Lm‖x_0 − x⋆‖²/(K + m)² + 8‖x_0 − x⋆‖√(σ² + M²)/√(K + m),
R6 := (12Lσ²/(λ1(S + 1)²)) Σ_{s=0}^{S−1} α_s² + (4Lσ D_X/(λ1(S + 1)² √m)) √(Σ_{s=0}^{S−1} α_s²),

we conclude that

Prob{ F(x̃_S) − F(x⋆) ≤ K0(m) + Λ·R6 } ≥ 1 − (exp{−Λ²/3} + exp{−Λ}).

For R6, using the choice of α_s and λ1, we obtain

R6 ≤ 4√6 σ D_X/(3√(K + m)) + 8λ1σ²(S + 1)/L ≤ 4√6 σ D_X/(3√(K + m)) + 4σ²‖x_0 − x⋆‖/(√(K + m) √(σ² + M²)) ≤ 4σ(3‖x_0 − x⋆‖ + √6 D_X)/(3√(K + m)),

which completes the proof.
B.2.4 PROOF OF THEOREM 2
Using Assumption (c), Lemma 1 with

x = x_k^{j_k}, z = z_k, z⁺ = z_{k+1}, y = φ_{j_k}^k, y⁺ = φ_{j_k}^{k+1}, α = α_k, β = β_k,

and taking expectation, if α_k(1 − β_k) < 1/L, we have

(1/(1 − β_k)) E_{i_k,j_k}[F(φ_{j_k}^{k+1}) − F(x⋆)] + (1/(2α_k)) E_{i_k,j_k}[‖z_{k+1} − x⋆‖²]
 ≤ (β_k/(1 − β_k)) E_{j_k}[F(φ_{j_k}^k) − F(x⋆)] + (1/(2α_k))‖z_k − x⋆‖² + (σ + M)²/(2(α_k⁻¹ − L(1 − β_k))).   (18)

Note that since φ_j^{k+1} = φ_j^k for j ≠ j_k,

E_{i_k,j_k}[F(φ_{j_k}^{k+1}) − F(x⋆)] = E_{i_k,j_k}[ Σ_{j=1}^{m} (F(φ_j^{k+1}) − F(x⋆)) ] − E_{j_k}[ Σ_{j≠j_k} (F(φ_j^k) − F(x⋆)) ].

Dividing both sides of (18) by m and then adding (1/((1 − β_k)m)) E_{j_k}[ Σ_{j≠j_k} (F(φ_j^k) − F(x⋆)) ] to both sides, we obtain

(1/(1 − β_k)) E_{i_k,j_k}[ (1/m) Σ_{j=1}^{m} F(φ_j^{k+1}) − F(x⋆) ] + (1/(2α_k m)) E_{i_k,j_k}[‖z_{k+1} − x⋆‖²]
 ≤ ((1 − (1 − β_k)/m)/(1 − β_k)) ( (1/m) Σ_{j=1}^{m} F(φ_j^k) − F(x⋆) ) + (1/(2α_k m))‖z_k − x⋆‖² + (σ + M)²/(2m(α_k⁻¹ − L(1 − β_k))).   (19)

It can be verified that with our parameter choice β_k = (k/m)/(k/m + 2) and α_k = λ2/(L(1 − β_k)), the following holds for k ≥ 0:

α_{k+1}(1 − (1 − β_{k+1})/m)/(1 − β_{k+1}) ≤ α_k/(1 − β_k)  and  β_0 = 0.
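Again for completeness, a short verification of this condition (our addition):

```latex
% Write u = k/m + 2, so that 1-\beta_k = 2/u and \alpha_k = \lambda_2 u/(2L).
% With u' = u + 1/m,
\frac{\alpha_{k+1}\left(1-\frac{1-\beta_{k+1}}{m}\right)}{1-\beta_{k+1}}
  = \frac{\lambda_2}{4L}\left((u')^2 - \frac{2u'}{m}\right)
  = \frac{\lambda_2}{4L}\left(u^2 - \frac{1}{m^2}\right)
  \le \frac{\lambda_2 u^2}{4L}
  = \frac{\alpha_k}{1-\beta_k}.
```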
Note that since our analysis aims at providing intuition, we do not refine the choice of α_k as in (Hu et al., 2009; Ghadimi & Lan, 2012). Then, we can telescope (19) from k = K − 1, . . . , 0, which results in

(α_{K−1}/(1 − β_{K−1})) E[ (1/m) Σ_{j=1}^{m} F(φ_j^K) − F(x⋆) ] + (1/(2m)) E[‖z_K − x⋆‖²]
 ≤ (λ2(m − 1)/(Lm)) (F(x_0) − F(x⋆)) + (1/(2m))‖x_0 − x⋆‖² + Σ_{k=0}^{K−1} α_k(σ + M)²/(2m(α_k⁻¹ − L(1 − β_k))).

Using the definition of φ̄_K and convexity, we obtain

E[F(φ̄_K) − F(x⋆)] ≤ ((1 − β_{K−1})/α_{K−1}) ( (λ2(m − 1)/(Lm))(F(x_0) − F(x⋆)) + (1/(2m))‖x_0 − x⋆‖² ) + ((1 − β_{K−1})/α_{K−1}) Σ_{k=0}^{K−1} α_k(σ + M)²/(2m(α_k⁻¹ − L(1 − β_k)))
 (a)= 4(m − 1)(F(x_0) − F(x⋆))/(m((K−1)/m + 2)²) + 2L‖x_0 − x⋆‖²/(λ2 m((K−1)/m + 2)²) + (3λ2(σ + M)²/(2Lm((K−1)/m + 2)²)) Σ_{k=0}^{K−1} (k/m + 2)²
 (b)≤ 4(m − 1)(F(x_0) − F(x⋆))/(m((K−1)/m + 2)²) + 2L‖x_0 − x⋆‖²/(λ2 m((K−1)/m + 2)²) + 4λ2(σ + M)²((K−1)/m + 2)/L,   (20)

where (a) uses λ2 ≤ 2/3 and (b) follows from simple integration arguments and the fact that K/m + 2 ≤ 2((K−1)/m + 2) since K ≥ 1, m ≥ 1.

Based on the choice of

λ2 = min{ 2/3, L‖x_0 − x⋆‖/(√(2m)(σ + M)((K−1)/m + 2)^{3/2}) },

(20) can be further upper bounded as

E[F(φ̄_K) − F(x⋆)] ≤ 4(m − 1)(F(x_0) − F(x⋆))/(m((K−1)/m + 2)²) + 3L‖x_0 − x⋆‖²/(m((K−1)/m + 2)²) + 4√2‖x_0 − x⋆‖(σ + M)/(m^{1/2}((K−1)/m + 2)^{1/2}).
B.3 CONNECTIONS BETWEEN AM1-SGD AND KATYUSHA
The discussion in this section aims to shed light on the understanding of the experimental results, which also shows some interesting relations between AM1-SGD and Katyusha.
The high level idea of Katyusha momentum is that it works as a “magnet” inside an epoch of SVRG updates, which “stabilizes” the iterates so as to make Nesterov’s momentum effective (Allen-Zhu, 2018). In theory, the key effect of Katyusha momentum is that it allows the tightest possible variance bound for the stochastic gradient estimator of SVRG (cf. Lemma 2.4 and its comments in Allen-Zhu (2018)). In this sense, we can interpret Katyusha momentum as a variance reducer that further reduces the variance of SVRG. Below we show the similarity between the construction of Katyusha and AM1-SGD, based on which we conjecture that the amortized momentum can also reduce the variance of SGD (and thus increase the robustness). However, in theory, following a similar analysis of Katyusha, we cannot guarantee a reduction of σ in the worst case.
Deriving AM1-SGD from Katyusha Katyusha has the following scheme (non-proximal, in the original notations, σ is the strong convexity parameter, cf. Algorithm 1 with Option I in Allen-Zhu (2018))12:
Initialize: x̃_0 = y_0 = z_0 = x_0, η = 1/(3L), ω = 1 + ασ.
1: for s = 0, . . . , S − 1 do
2:   Compute and store ∇f(x̃_s).
3:   for j = 0, . . . , m − 1 do
4:     k = sm + j.
5:     x_k = τ_1 · z_k + τ_2 · x̃_s + (1 − τ_1 − τ_2) · y_k.
6:     ∇̃_k = ∇f_{i_k}(x_k) − ∇f_{i_k}(x̃_s) + ∇f(x̃_s).
| 1. What are the main contributions and novelties of the paper regarding Nesterov's Acceleration and mini-batch stochastic gradients?
2. How does the proposed amortized momentum method address the robustness issues faced by Nesterov's Acceleration in deep model training?
3. What are the strengths and weaknesses of the paper's theoretical analysis, particularly in comparison to existing literature on accelerated gradient methods?
4. How do the paper's experimental results support or not support the effectiveness and practicality of the proposed algorithms AM-SGD1 and AM-SGD2?
5. What are some concerns or limitations regarding the paper's assumptions and setup for analyzing stochastic gradient descent and its applications in deep learning?
| Review | Review
%% Post Author Response comments %%
Thank you for your detailed response/revision.
1 - Introducing “m-times” larger momentum: Somehow, this is not a particularly intuitive statement or one that reflects clearly in a theoretical bound. Since we are getting to issues surrounding the use of momentum with stochastic optimization, I would like to note that the performance of these algorithms is not fully sketched out for broader stochastic optimization. In particular, despite broad use in practice, it is unclear if standard variants of Nesterov acceleration/Heavy Ball method achieve "acceleration" in stochastic optimization. See for e.g., the work of Kidambi et al ICLR 2018 (“On the insufficiency of existing momentum schemes for stochastic optimization”), where the argument was that these methods were designed for deterministic optimization (where we get exact gradients); in fact, that paper shows both empirically and theoretically that these schemes do not offer acceleration in a precise sense compared to specialized algorithms for stochastic optimization. It is unclear if the proposed algorithms can offer a similar improvement over SGD in a provable sense, even for the specific examples described in their paper.
2 - The point about theory (just as you mention) is that it doesn’t directly apply to the simulations, nor does it improve on already known algorithms, so I am unable to see how these results present broader implications that can guide practice.
3 - The response doesn’t address the fact that for the theory bounds presented in the paper to hold (even in the convex settings described), one requires knowledge of parameters that are not known a-priori, and are often fairly difficult to estimate. So the performance of the algorithm in practice may quite significantly be away from the bounds described in the paper.
While I appreciate the points and revision made by the authors, I still believe the paper requires some rethinking to present their results (and this includes more detailed comparisons to existing works) in order to make a case towards broader practical applications.
%%. %%
This paper considers robustness issues faced by Nesterov’s Acceleration used with mini-batch stochastic gradients for training Deep Models. In particular, the paper proposes amortized momentum, an algorithm that offers a way to handle these issues. The paper in general is well written and easy to follow.
The paper proposes algorithms AM-SGD1 and AM-SGD2 and presents extensive results regarding their complexity analysis on convex problems and their performance when training neural networks. The algorithms require storing one more model’s worth of storage compared to standard momentum based methods (which can be viewed as a drawback in certain cases).
Comments:
[1] I am concerned about the motivation behind this paper - which, according to the paper is that Nesterov’s accelerated gradient method with stochastic gradients has huge initial fluctuations. The issue with regards to more fluctuations of the initial performance is natural given how aggressive these accelerated methods work. As long as this is not a reason/cause for worse terminal performance (which doesn’t seem to be the case), I am unable to see why large initial fluctuations are concerning.
[2] Theory: The theory bounds for this problem setting do not appear to improve over known bounds in the literature. As a side note, the work of Hu et al. “Accelerated Gradient Methods for Stochastic Optimization and Online Learning” is highly related to this paper’s theoretical aspects, setup and bounds. Furthermore, this bounded variance noise model for stochastic gradients, while being theoretically useful (and important), is often very detached from practice (as this implies that the domain is bounded and we perform projections of iterates whenever they go outside the set - such aspects hardly reflect on practical SGD implementations). Using this as a means to reason about robustness of the proposed algorithm (for e.g. remarks for theorem 1a. and in conclusions) appears to be a big leap that may lead to potentially misleading conclusions.
[3] In order to run the algorithm to achieve the theoretical bounds claimed (in theorems 1 and 2), it appears that the stepsize \alpha_s depends on unknown quantities such as initial distance to opt, noise variance etc.
[4] The claim on page 2 comparing SGD and M-SGD, namely that the stepsize in deterministic and stochastic optimization is constrained to be O(1/L), is rather misleading. In realistic practical implementations of SGD with a multiplicative noise oracle, one really has to use a much smaller stepsize than 1/L. This in a sense leads back to point [2] about the unrealistic nature of bounded variance assumptions for understanding SGD based methods used in the context of Machine Learning. They are better suited for understanding stochastic methods in black-box optimization (as opposed to considering Machine Learning problems).
My take is that even if the authors justify novelty in terms of theory results (which, to my knowledge is limited compared to existing literature), rewriting the paper by considering its theoretical merit and presenting empirical results (even as considered in this paper) of this algorithm (without attempting to make very strong connections to explain issues experienced in non-convex training of neural networks, since the theory works in vastly different settings under restrictive assumptions) can be appreciated by appropriate sections of audience (both in theory as well as optimization for deep learning communities). |
ICLR | Title
Amortized Nesterov's Momentum: Robust and Lightweight Momentum for Deep Learning
Abstract
Stochastic Gradient Descent (SGD) with Nesterov’s momentum is a widely used optimizer in deep learning, which is observed to have excellent generalization performance. In this work, we propose Amortized Nesterov’s Momentum, which is a special variant of Nesterov’s momentum. Compared with Nesterov’s momentum, our new momentum has more robust iterates and higher efficiency. Our empirical results show that it achieves faster early convergence and comparable final generalization performance with little-to-no tuning. Just like Nesterov’s method, the new schemes are also proved optimal in general convex setting. Our analysis sheds light on the understanding of the new variant.
1 INTRODUCTION
In recent years, Gradient Descent (GD) (Cauchy, 1847) and its variants have been widely used to solve large scale machine learning problems. Among them, Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951), which replaces gradient with an unbiased stochastic gradient estimator, is a popular choice of optimizer especially for neural network training which requires lower precision. Sutskever et al. (2013) found that using SGD with Nesterov’s momentum (Nesterov, 1983; 2013b), which was originally designed to accelerate deterministic convex optimization, achieves substantial speedups for training neural networks. This finding essentially turns SGD with Nesterov’s momentum into the benchmarking method of neural network design, especially for classification tasks (He et al., 2016b;a; Zagoruyko & Komodakis, 2016; Huang et al., 2017). It is observed that in these tasks, the momentum technique plays a key role in achieving good generalization performance.
Adaptive methods (Duchi et al., 2011; Kingma & Ba, 2015; Tieleman & Hinton, 2012; Reddi et al., 2018), which are also becoming increasingly popular in the deep learning community, diagonally scale the gradient to speed up training. However, Wilson et al. (2017) show that these methods always generalize poorly compared with SGD with momentum (both classical momentum (Polyak, 1964) and Nesterov’s momentum).
In this work, we introduce Amortized Nesterov’s Momentum, which is a special variant of Nesterov’s momentum. From users’ perspective, the new momentum has only one additional integer hyper-parameter m to choose, which we call the amortization length. Learning rate and momentum parameter of this variant are strictly aligned with Nesterov’s momentum and by choosing m = 1, it recovers Nesterov’s momentum. This paper conducts an extensive study based on both empirical evaluation and convex analysis to identify the benefits of the new variant (or from users’ angle, to set m apart from 1). We list the advantages of Amortized Nesterov’s Momentum as follows:
• Increasing m improves robustness1. This is an interesting property since the new momentum not only provides acceleration, but also enhances the robustness. We provide an understanding of this property by analyzing the relation between convergence rate andm in the convex setting. • Increasing m reduces (amortized) iteration complexity. • A suitably chosen m boosts the convergence rate in the early stage of training and produces
comparable final generalization performance.
1In this work, robustness refers to the probability of an optimizer significantly deviating from its expected performance, which can be reflected by the deviations of accuracy or loss in the training process over multiple runs that start with the same initial guess.
• It is easy to tune m. The performances of the methods are stable for a wide range of m and we prove that the methods converge for any valid choice of m in the convex setting. • If m is not too large, the methods obtain the optimal convergence rate in the general convex setting,
just like Nesterov’s method.
The new variant does have some minor drawbacks: it requires one more memory buffer, which is acceptable in most cases, and it shows some undesired behaviors when working with learning rate schedulers, which can be addressed by a small modification. Considering these pros and cons, we believe that the proposed variant can benefit many large-scale deep learning tasks.
Our high level idea is simple: the stochastic Nesterov’s momentum can be unreliable since it is provided only by the previous stochastic iterate. The iterate potentially has large variance, which may lead to a false momentum that perturbs the training process. We thus propose to use the stochastic Nesterov’s momentum based on several past iterates, which provides robust acceleration. In other words, instead of immediately using an iterate to provide momentum, we put the iterate into an “amortization plan” and use it later.
2 PRELIMINARIES: SGD AND NESTEROV’S MOMENTUM
We start with a review of SGD and Nesterov’s momentum. We discuss some subtleties in the implementation and evaluation, which contributes to the interpretation of our methods.
Notations In this paper, we use x ∈ Rd to denote the vector of model parameters. ‖·‖ and 〈·, ·〉 denote the standard Euclidean norm and inner product, respectively. Scalar multiplication for v ∈ Rd and β ∈ R is denoted as β ·v. f : Rd → R denotes the loss function to be minimized and∇f(x) represents the gradient of f evaluated at x. We denote the unbiased stochastic gradient estimator of ∇f(x) as ∇fi(x) with the random variable i independent of x (e.g., using mini-batch). We use x0 ∈ Rd to denote the initial guess.
SGD SGD has the following simple iterative scheme, where γ ∈ R denotes the learning rate:
x_{k+1} = x_k − γ · ∇f_{i_k}(x_k),  for k ≥ 0.
Nesterov’s momentum The original Nesterov’s accelerated gradient (with constant step) (Nesterov, 1983; 2013b) has the following scheme2 (y ∈ Rd, η, β ∈ R and y0 = x0):
y_{k+1} = x_k − η · ∇f(x_k),
x_{k+1} = y_{k+1} + β · (y_{k+1} − y_k),  for k ≥ 0,   (1)
where we call β · (y_{k+1} − y_k) the momentum. By simply replacing ∇f(x_k) with ∇f_{i_k}(x_k), we obtain the SGD with Nesterov’s momentum, which is widely used in deep learning. To make this point clear, recall that the reformulation in Sutskever et al. (2013) (scheme (2), also the Tensorflow (Abadi et al., 2016) version) and the PyTorch (Paszke et al., 2017) version (scheme (3)) have the following schemes (v, v^{pt} ∈ R^d and v_0 = v^{pt}_0 = 0): for k ≥ 0,

(2)  v_{k+1} = β · v_k − η · ∇f_{i_k}(y_k + β · v_k),   y_{k+1} = y_k + v_{k+1}.
(3)  v^{pt}_{k+1} = β · v^{pt}_k + ∇f_{i_k}(x_k),   x_{k+1} = x_k − η · (β · v^{pt}_{k+1} + ∇f_{i_k}(x_k)).
Here the notations are modified based on their equivalence to scheme (1). It can be verified that schemes (2) and (3) are equivalent to (1) through v_k = β^{-1} · (x_k − y_k) and v^{pt}_k = η^{-1}β^{-1} · (y_k − x_k), respectively (see Defazio (2018) for other equivalent forms of scheme (1)).
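These equivalences are easy to confirm numerically; the sketch below (ours, using a toy quadratic with exact gradients so all three schemes are deterministic) checks that (1), (2) and (3) trace the same x-iterates:

```python
import torch

def grad(x):  # toy quadratic loss f(x) = 0.5 * ||x||^2
    return x

eta, beta, x0 = 0.1, 0.9, torch.ones(3)

x1, y = x0.clone(), x0.clone()          # scheme (1): tracks x_k and y_k
y2, v2 = x0.clone(), torch.zeros(3)     # scheme (2): tracks y_k, with x_k = y_k + beta*v_k
x3, v3 = x0.clone(), torch.zeros(3)     # scheme (3): tracks x_k directly

for k in range(50):
    y_new = x1 - eta * grad(x1)
    x1 = y_new + beta * (y_new - y)
    y = y_new

    v2 = beta * v2 - eta * grad(y2 + beta * v2)  # gradient evaluated at x_k
    y2 = y2 + v2

    g = grad(x3)
    v3 = beta * v3 + g
    x3 = x3 - eta * (beta * v3 + g)

print(torch.allclose(x1, y2 + beta * v2), torch.allclose(x1, x3))  # True True
```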
Interestingly, both PyTorch and Tensorflow3 track the values {xk}, which we refer to as M-SGD. This choice allows a consistent implementation when wrapped in a generic optimization layer (Defazio, 2018). However, the accelerated convergence rate (in the convex case) is built upon {yk} (Nesterov, 2013b) and {xk} may not possess such a theoretical improvement. We use OM-SGD to refer to the Original M-SGD that outputs {yk}.
2We exchange the notations of x and y in Nesterov (2013b).
3Tensorflow tracks the values {y_k + β · v_k} = {x_k}.
SGD and M-SGD In order to study the features of momentum, in this work, we regard momentum as an add-on to plain SGD, which corresponds to fixing the learning rates4 γ = η. From the interpretation in Allen-Zhu & Orecchia (2017), η represents the learning rate for the gradient descent “inside” Nesterov’s method. To introduce the evaluation metrics of this paper, we report the results of training ResNet34 (He et al., 2016b) on CIFAR-10 (Krizhevsky et al., 2009) (our basic case study) using SGD and M-SGD in Figure 1. In this paper, all the multiple runs start with the same initial guess x0. Figure 1a shows that Nesterov’s momentum hurts the convergence in the first 60 epochs but accelerates the final convergence, which verifies the importance of momentum for achieving high accuracy. Figure 1b depicts the robustness of M-SGD and SGD, which suggests that adding Nesterov’s momentum slightly increases the uncertainty in the training process of SGD.
Train-batch loss vs. Full-batch loss In Figure 1c, train-batch loss stands for the average of batch losses forwarded in an epoch, which is commonly used to indicate the training process in deep learning. Full-batch loss is the average loss over the entire training dataset evaluated at the end of each epoch. In terms of optimizer evaluation, full-batch loss is much more informative than train-batch loss as it reveals the robustness of an optimizer. However, full-batch loss is too expensive to evaluate and thus we only measure it on small datasets. On the other hand, test accuracy couples optimization and generalization, but since it is also evaluated at the end of the epoch, its convergence is similar to full-batch loss. Considering the basic usage of momentum in deep learning, we mainly use test accuracy to evaluate optimizers. We provide more discussion on this issue in Appendix C.2.
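For clarity, the two metrics can be computed as in the following sketch (ours, assuming a mean-reduced loss function and a standard classification loop):

```python
import torch

def epoch_metrics(model, loader, loss_fn, optimizer):
    # Train-batch loss: average of the batch losses forwarded during the epoch.
    batch_losses = []
    model.train()
    for xb, yb in loader:
        loss = loss_fn(model(xb), yb)
        batch_losses.append(loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    train_batch_loss = sum(batch_losses) / len(batch_losses)

    # Full-batch loss: average loss over the entire training set at epoch end.
    model.eval()
    with torch.no_grad():
        total, n = 0.0, 0
        for xb, yb in loader:
            total += loss_fn(model(xb), yb).item() * xb.size(0)
            n += xb.size(0)
    return train_batch_loss, total / n
```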
M-SGD vs. OM-SGD We also include OM-SGD in Figure 1a. In comparison, the final accuracies of M-SGD and OM-SGD are 94.606%± 0.152% and 94.728%± 0.111% with average deviations at 1.040% and 0.634%, respectively. This difference can be explained following the interpretation in Hinton (2012) that {xk} are the points after “jump” and {yk} are the points after “correction”.
3 AMORTIZED NESTEROV’S MOMENTUM
In this section, we formally introduce SGD with Amortized Nesterov’s Momentum (AM1-SGD) in Algorithm 1 with the following remarks:
Options It can be verified that if m = 1, AM1-SGD with Option I degenerates to M-SGD and Option II corresponds to OM-SGD. Just like the case for M-SGD and OM-SGD, the accelerated convergence rate is built upon Option II, while Option I is easier to implement in a generic optimization layer5. Intuitively, Option I is SGD with amortized momentum and Option II applies an m-iterations tail averaging on Option I.
4Ma & Yarats (2019) observed that when effective learning rates γ = η(1 − β)^{-1} are fixed, M-SGD and SGD have similar performance. We provide a discussion on this observation in Appendix C.1.
5To implement Option II, we can either maintain another identical network for the shifted point x̃ or temporarily change the network parameters in the evaluation phase.
Algorithm 1 AM1-SGD
Input: Initial guess x_0, learning rate η, momentum β, amortization length m, iteration number K.
Initialize: x ← x_0, x̃ ← x_0, x̃⁺ ← 0. {a running average}
1: for k = 0, . . . , K − 1 do
2:   x ← x − η · ∇f_{i_k}(x).
3:   x̃⁺ ← x̃⁺ + (1/m) · x.
4:   if (k + 1) mod m = 0 then
5:     x ← x + β · (x̃⁺ − x̃). {adding amortized momentum}
6:     x̃ ← x̃⁺, x̃⁺ ← 0.
7:   end if
8: end for
Output: Option I: x, Option II: x̃.
* The symbol ‘←’ denotes assignment.
Efficiency We can improve the efficiency of Algorithm 1 by maintaining a running scaled momentum ṽ⁺ := m · (x̃⁺ − x̃) instead of the running average x̃⁺, by replacing the following steps in Algorithm 1:

Initialize: x ← x_0, x̃ ← x_0, ṽ⁺ ← −m · x_0.
Step 3: ṽ⁺ ← ṽ⁺ + x.
Step 5: x ← x + (β/m) · ṽ⁺.
Step 6: x̃ ← x̃ + (1/m) · ṽ⁺, ṽ⁺ ← −m · x̃.
Then, in one m-iterations loop, for each of the first m − 1 iterations, AM1-SGD requires 1 vector addition and 1 scaled vector addition. At the m-th iteration, it requires 1 vector addition, 1 scalar-vector multiplication and 3 scaled vector additions. In comparison, M-SGD (standard PyTorch) requires 1 vector addition, 1 (in-place) scalar-vector multiplication and 2 scaled vector additions per iteration. Thus, as long as m > 2, AM1-SGD has lower amortized cost than M-SGD. For memory complexity, AM1-SGD requires one more auxiliary buffer than M-SGD.
Tuning m We did a parameter sweep for m in our basic case study. We plot the final test accuracy and the average deviation over 5 runs against m in Figure 2a. Note that m = 1 corresponds to the results of M-SGD and OM-SGD, which are already given in Figure 1. From this empirical result, m introduces a trade-off between final accuracy and robustness (the convergence behaviors can be found in Appendix A.1). Figure 2a suggests that m = 5 is a good choice for this task. For simplicity, and also as a recommended setting, we fix m = 5 for the rest of the experiments in this paper.
A momentum that increases robustness To provide a stronger justification, we ran 20 seeds with m = 5 in Figure 2b and the detailed data are given in Figure 3 & Table 1. The results show that the amortized momentum significantly increases the robustness. Intuitively, the gap between Option I and Option II can be understood as the effect of tail averaging. However, the large gap between Option I and SGD is somewhat mysterious: what Option I does is to inject a very large momentum6 into SGD every m iterations. It turns out that this momentum not only provides acceleration, but also helps the algorithm become more robust than SGD. This observation basically differentiates AM1-SGD from a simple interpolation in-between M-SGD and SGD.
6Amortized momentum β · (x̃⁺ − x̃) is expected to be much larger than Nesterov’s momentum β · (y_{k+1} − y_k).
Learning rate scheduler issue We observed that when we use schedulers with a large decay factor and the momentum β is too large for the task (e.g., 0.995 for the task of this section), there would be a performance drop after the learning rate reduction. We believe that it is caused by the different cardinalities of iterates being averaged in x̃+, which leads to a false momentum. This issue is resolved by restarting the algorithm after each learning rate reduction inspired by (O’donoghue & Candes, 2015). We include more discussion and evidence in Appendix A.4.
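As a sketch of the fix (the hook below is our own illustration, not part of Algorithm 1): whenever the scheduler reduces η, re-initialize the anchor and the running momentum at the current iterate so that no stale, mis-scaled momentum survives the reduction.

```python
def restart(x, m):
    # Hypothetical restart hook for AM1-SGD: call right after a learning rate reduction.
    x_tilde = x.copy()        # the anchor restarts at the current point
    v_plus = -m * x_tilde     # the running scaled momentum restarts empty
    return x_tilde, v_plus
```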
3.1 AM2-SGD: A VARIANT WITH IDENTICAL ITERATIONS
Algorithm 2 AM2-SGD
Input: Initial guess x_0, amortization length m, a point table φ = [φ_1 · · · φ_m] ∈ R^{d×m}, learning rate η, momentum β, iteration number K.
Initialize: φ_j^0 = x_0, ∀j ∈ [m]*. {j_k | j_k ∈ [m]}_{k=0}^{K−1} is a sequence of uniformly random indexes. If Option II is used, φ̄_0 = x_0. {a running average for the point table φ}
1: for k = 0, . . . , K − 1 do
2:   φ_{j_k}^{k+1} = x_k − η · ∇f_{i_k}(x_k) and keep other entries unchanged (i.e., φ_j^{k+1} = φ_j^k for j ≠ j_k).
3:   x_{k+1} = φ_{j_k}^{k+1} + β · (φ_{j_{k+1}}^{k+1} − φ_{j_k}^k). {adding amortized momentum}
4:   if Option II then φ̄_{k+1} = φ̄_k + (1/m) · (φ_{j_k}^{k+1} − φ_{j_k}^k).
5: end for
Output: Option I (not recommended): x_K, Option II: φ̄_K.
* [m] denotes the set {1, . . . , m}.
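A NumPy sketch of Algorithm 2, in the same spirit as the AM1-SGD sketch above (`grad` again stands in for the stochastic oracle):

```python
import numpy as np

def am2_sgd(grad, x0, eta, beta, m, K, seed=0, option="II"):
    """Sketch of Algorithm 2 (AM2-SGD)."""
    rng = np.random.default_rng(seed)
    phi = np.tile(x0, (m, 1))    # point table: one row per slot
    phi_bar = x0.copy()          # running average of the table (Option II)
    x = x0.copy()
    j = rng.integers(m)          # j_0
    for k in range(K):
        j_next = rng.integers(m)                 # j_{k+1}
        old = phi[j].copy()                      # phi_{j_k}^k
        phi[j] = x - eta * grad(x)               # Step 2: overwrite slot j_k
        x = phi[j] + beta * (phi[j_next] - old)  # Step 3: amortized momentum
        phi_bar += (phi[j] - old) / m            # Step 4: maintain the average
        j = j_next
    return x if option == "I" else phi_bar
```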
While enjoying an improved efficiency, AM1-SGD does not have identical iterations7, which to some extent limits its extensibility to other settings (e.g., asynchronous setting). In this section, we propose a variant of Amortized Nesterov’s Momentum (AM2-SGD, Algorithm 2) to address this problem. To show the characteristics of AM2-SGD, we make the following remarks:
Trading memory for extensibility In expectation, the point table φ stores the most recent m iterations and thus the output φ̄K is an m-iterations tail average, which connects to AM1-SGD. The relation between AM1-SGD and AM2-SGD resembles that of SVRG (Johnson & Zhang, 2013) and SAGA (Defazio et al., 2014), the most popular methods in finite-sum convex optimization: to reuse the information from several past iterates, we can either maintain a “snapshot” that aggregates the information or keep the iterates in a table. A side-by-side comparison is given in Section 4.
Options and convergence As in the case of AM1-SGD, if m = 1, AM2-SGD with Option I corresponds to M-SGD and Option II is OM-SGD. In our preliminary experiments (which can be found in Appendix A), the convergence of AM2-SGD is similar to AM1-SGD and it also has the learning rate scheduler issue; we also observed that Option I is consistently worse than Option II and does not seem to benefit from increasing m. Thus, we do not recommend using Option I. We also set m = 5 for AM2-SGD for its evaluation due to the similarity.
7For AM1-SGD, the workload varies for different iteration k due to the if-clause at Step 4.
Additional randomness {j_k} In our implementation, at each iteration, we sample an index in [m] as j_{k+1} and obtain the stored index j_k. We observed that with Option I, AM2-SGD has much larger deviations than AM1-SGD, which we believe is caused by the additional random indexes {j_k}.
4 CONVERGENCE RESULTS
The original Nesterov’s accelerated gradient is famous for its optimal convergence rates for solving convex problems. In this section, we analyze the convergence rates for AM1-SGD and AM2-SGD in the convex case, which explicitly model the effect of amortization (i.e., m). While these rates do not hold for deep learning problems in general, they help us understand the observed convergence behaviors of the proposed methods, especially on how they differ from M-SGD (m = 1). Moreover, the analysis also provides intuition on tuning m. Since the original Nesterov’s method is deterministic (Nesterov, 1983; 2013b), we follow the setting of its stochastic variants (Lan, 2012; Ghadimi & Lan, 2012), in which Nesterov’s acceleration also achieves the optimal rates.
We consider the following convex composite problem (Beck & Teboulle, 2009; Nesterov, 2013a):

min_{x∈X} { F(x) ≜ f(x) + h(x) },  (4)
where X ⊆ R^d is a non-empty closed convex set and h is a proper convex function with its proximal operator prox_{αh}(·)8 available. We impose the following assumptions on the regularity of f and the stochastic oracle ∇f_i (identical to the ones in Ghadimi & Lan (2012) with µ = 0):

Assumptions. For some L ≥ 0, M ≥ 0, σ ≥ 0,
(a) 0 ≤ f(y) − f(x) − ⟨∇f(x), y − x⟩ ≤ (L/2)‖y − x‖² + M‖y − x‖, ∀x, y ∈ X.9
(b) E_i[∇f_i(x)] = ∇f(x), ∀x ∈ X.
(c) E_i[‖∇f_i(x) − ∇f(x)‖²] ≤ σ², ∀x ∈ X.
The notation E_{i_k}[ · ] is E[ · | (i_0, . . . , i_{k−1})] for a random process i_0, i_1, . . .. These assumptions cover several important classes of convex problems. For example, (a) covers the cases of f being L-smooth (M = 0) or L_0-Lipschitz continuous (M = 2L_0, L = 0) convex functions, and if σ = 0 in (c), the assumptions cover several classes of deterministic convex programming problems. We denote x⋆ ∈ X as a solution to problem (4) and x_0 ∈ X as the initial guess. Unlike its usage in deep learning, the momentum parameter β is always a variable in general convex analysis. For the simplicity of analysis, we reformulate AM1-SGD (Algorithm 1) and AM2-SGD (Algorithm 2) into the following schemes10 (z ∈ X, α ∈ R):
AM1-SGD (reformulated, proximal)
Initialize: x̃_0 = z_0 = x_0, S = K/m.
1: for s = 0, . . . , S − 1 do
2:   for j = 0, . . . , m − 1 do
3:     k = sm + j.
4:     x_k = (1 − β_s) · z_k + β_s · x̃_s.
5:     z_{k+1} = prox_{α_s h}{z_k − α_s · ∇f_{i_k}(x_k)}.
6:     (x_{k+1} = (1 − β_s) · z_{k+1} + β_s · x̃_s.)
7:   end for
8:   x̃_{s+1} = (1/m) Σ_{j=1}^{m} x_{sm+j}.
9: end for
Output: x̃_S.

AM2-SGD (reformulated, proximal)
Initialize: z_0 = φ_j^0 = x_0, ∀j ∈ [m].
1: for k = 0, . . . , K − 1 do
2:   Sample j_k uniformly in [m].
3:   x_k^{j_k} = (1 − β_k) · z_k + β_k · φ_{j_k}^k.
4:   z_{k+1} = prox_{α_k h}{z_k − α_k · ∇f_{i_k}(x_k^{j_k})}.
5:   φ_{j_k}^{k+1} = (1 − β_k) · z_{k+1} + β_k · φ_{j_k}^k.
6: end for
Output: φ̄_K = (1/m) Σ_{j=1}^{m} φ_j^K.
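As an illustration of the reformulated AM1-SGD, here is a NumPy sketch for a composite toy problem with h(x) = λ_h‖x‖₁, whose proximal operator is soft-thresholding; the schedules β_s, α_s follow the choice in Theorem 1 below, with `lam1` playing the role of λ_1:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (Euclidean case)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def am1_sgd_proximal(grad, x0, L, lam1, lam_h, m, S):
    """Reformulated AM1-SGD for F(x) = f(x) + lam_h * ||x||_1 (a sketch)."""
    x_tilde = x0.copy()
    z = x0.copy()
    for s in range(S):
        beta = s / (s + 2.0)               # beta_s as in Theorem 1
        alpha = lam1 / (L * (1 - beta))    # alpha_s as in Theorem 1
        block = []
        for _ in range(m):
            x = (1 - beta) * z + beta * x_tilde
            z = soft_threshold(z - alpha * grad(x), alpha * lam_h)
            block.append((1 - beta) * z + beta * x_tilde)
        x_tilde = np.mean(block, axis=0)   # x~_{s+1}: m-iterations average
    return x_tilde
```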
We show in Appendix B.1 that when h ≡ 0 and β is a constant, the reformulated schemes AM1-SGD and AM2-SGD are equivalent to Algorithm 1 and Algorithm 2 through α_s = η(1 − β_s)^{−1} and
8∀x ∈ R^d, prox_{αh}(x) ≜ argmin_{u∈X} { (1/2)‖u − x‖² + αh(u) }, see Parikh et al. (2014).
9When M > 0, f is not necessarily differentiable and we keep using the notation ∇f(x) to denote an arbitrary subgradient of f at x for consistency.
10For simplicity, we assume K is divisible by m.
α_k = η(1 − β_k)^{−1}. These reformulations are basically how Nesterov’s momentum was migrated into deep learning (Sutskever et al., 2013). Then we establish the convergence rates for AM1-SGD and AM2-SGD as follows. All the proofs in this paper are given in Appendix B.2.

Theorem 1. For the reformulated AM1-SGD, suppose we choose

β_s = s/(s + 2) and α_s = λ_1/(L(1 − β_s)) with λ_1 = min{ 2/3, L‖x_0 − x⋆‖ / (2√m √(σ² + M²) (S + 1)^{3/2}) }.  (5)
Then,

(a) The output x̃_S satisfies

E[F(x̃_S)] − F(x⋆) ≤ 3Lm‖x_0 − x⋆‖²/(K + m)² + 8‖x_0 − x⋆‖√(σ² + M²)/√(K + m) ≜ K_0(m).
(b) If the variance has a “light tail”, i.e., E_i[exp{‖∇f_i(x) − ∇f(x)‖²/σ²}] ≤ exp{1}, ∀x ∈ X, and X is compact, denoting D_X ≜ max_{x∈X}‖x − x⋆‖, for any Λ ≥ 0, we have

Prob{ F(x̃_S) − F(x⋆) ≤ K_0(m) + 4Λσ(3‖x_0 − x⋆‖ + √6 D_X)/(3√(K + m)) } ≥ 1 − (exp{−Λ²/3} + exp{−Λ}).
Remarks: (a) Regarding K_0(m), its minimum is obtained at either m = 1 or m = K. Note that for AM1-SGD, m is strictly constrained in {1, . . . , K}. It can be verified that when m = K, AM1-SGD becomes the modified mirror descent SA (Lan, 2012), or under the Euclidean setting, the SGD that outputs the average of the whole history, which is rarely used in practice. In this case, the convergence rate in Theorem 1a becomes the corresponding O(L/K + (σ + M)/√K) (cf. Theorem 1 in Lan (2012)). Thus, we can regard AM1-SGD as a smooth transition between AC-SA and the modified mirror descent SA. (b) The additional compactness and “light tail” assumptions are similarly required in Nemirovski et al. (2009); Lan (2012); Ghadimi & Lan (2012). Recently, Juditsky et al. (2019) established similar bounds under weaker assumptions by truncating the gradient. However, as indicated by the authors, their technique cannot be used for accelerated algorithms due to the accumulation of bias.
Understandings: Theorem 1a gives the expected performance in terms of the full-batch loss F(x̃) − F(x⋆), from which the trade-off of m is clear: increasing m improves the dependence on the variance σ but deteriorates the O(L/K²) term (i.e., the acceleration). Based on this trade-off, we can understand the empirical results in Figure 2b: the faster convergence in the early stage could be the result of a better control on σ and the slightly lowered final accuracy is possibly caused by the reduced acceleration effect. Theorem 1b provides the probability of the full-batch loss deviating from its expected performance (i.e., K_0(m)). It is clear that increasing m leads to smaller deviations with the same probability, which sheds light on the understanding of the increased robustness observed in Figure 2. Since the theorem is built on the full-batch loss, we did an experiment based on this
metric in Figure 4 & Table 2. Here we choose training a smaller ResNet18 with pre-activation (He et al., 2016a) on CIFAR-10 as the case study (the test accuracy is reported in Appendix A.5).
For AM2-SGD, we only give the expected convergence results as follows.

Theorem 2. For the reformulated AM2-SGD, if we choose

β_k = (k/m)/(k/m + 2) and α_k = λ_2/(L(1 − β_k)) with λ_2 = min{ 2/3, L‖x_0 − x⋆‖ / (√(2m)(σ + M)((K−1)/m + 2)^{3/2}) },

the output φ̄_K satisfies

E[F(φ̄_K)] − F(x⋆) ≤ (4(m² − m)(F(x_0) − F(x⋆)) + 3Lm‖x_0 − x⋆‖²)/(K + 2m − 1)² + 4√2 ‖x_0 − x⋆‖(σ + M)/√(K + 2m − 1).
Remark: In comparison with Theorem 1a, Theorem 2 has an additional term F(x_0) − F(x⋆) in the upper bound, which is inevitable. This difference comes from different restrictions on the choice of m. For AM2-SGD, m ≥ 1 is the only requirement. Since it is impossible to let m ≫ K and still obtain an improved rate, this additional term is inevitable. As a sanity check, we can let m → ∞ to obtain a point table with almost all x_0, and then the upper bound becomes exactly F(x_0) − F(x⋆). In some cases, there exists an optimal choice of m > 1 in Theorem 2. However, the optimal choice could be messy and thus we omit the discussion here.
Understanding: Comparing the rates, we see that when using the same m, AM2-SGD has slightly better dependence on σ, which is related to the observation in Figure 5 that AM2-SGD is always slightly faster than AM1-SGD. This difference suggests that randomly incorporating past iterates beyond m iterations helps. If m = O(1), Theorems 1 and 2 establish the optimal O(L/K² + (σ + M)/√K) rate in the convex setting (see Lan (2012) for optimality), which verifies AM1-SGD and AM2-SGD as variants of Nesterov’s method (Nesterov, 1983; 2013b). From the above analysis, the effect of m can be understood as trading acceleration for variance control. However, since both acceleration and variance control boost the convergence speed, the reduced final performance observed in the CIFAR experiments may not always be the case, as will be shown in Figure 5 and Table 3.
Connections with Katyusha Our original inspiration of AM1-SGD comes from the construction of Katyusha (Allen-Zhu, 2018), the recent breakthrough in finite-sum convex optimization, which uses a previously calculated “snapshot” point to provide momentum, i.e., Katyusha momentum. AM1-SGD also uses an aggregated point to provide momentum and it shares many structural similarities with Katyusha. We refer the interested readers to Appendix B.3.
5 PERFORMANCE EVALUATION
In this section, we evaluate AM1-SGD and AM2-SGD on more deep learning tasks. Our goal is to show their potential to serve as alternatives for M-SGD. Regarding the options: for AM1-SGD, Option I is a nice choice, which has slightly better final performance as shown in Table 1; for AM2-SGD, Option I is not recommended as mentioned before. Here we choose to evaluate Option II for both methods for consistency, which also corresponds to the analysis in Section 4. AM1-SGD and AM2-SGD use exactly the same values for (η, β) as M-SGD, which were tuned to optimize the performance of M-SGD. We set m = 5 for AM1-SGD and AM2-SGD.
We trained ResNet50 and ResNet152 (He et al., 2016b) on the ILSVRC2012 dataset (“ImageNet”) (Russakovsky et al., 2015) shown in Figure 5b. For this task, we used 0.1 initial learning rate and 0.9 momentum for all methods, which is a typical choice. We performed a restart after each learning rate reduction as discussed in Appendix A.4. We believe that this helps the training process and also does not incur any additional overhead. We report the final accuracy in Table 3.
We also did a language model experiment on Penn Treebank dataset (Marcus et al., 1993). We used the LSTM (Hochreiter & Schmidhuber, 1997) model defined in Merity et al. (2017) and followed the experimental setup in its released code. We only changed the learning rate and momentum in
the setup. The baseline is SGD+ASGD11 (Polyak & Juditsky, 1992) with a constant learning rate of 30 as used in Merity et al. (2017). For the choice of (η, β), following Lucas et al. (2019), we chose β = 0.99 and used the scheduler that reduces the learning rate by half when the validation loss has not decreased for 15 epochs. We swept η from {5, 2.5, 1, 0.1, 0.01} and found that η = 2.5 resulted in the lowest validation perplexity for M-SGD. We thus ran AM1-SGD and AM2-SGD with this (η, β) and m = 5. Due to the small decay factor, we did not restart AM1-SGD and AM2-SGD after learning rate reductions. The validation perplexity curve is plotted in Figure 5a. We report validation perplexity and test perplexity in Table 3. This experiment is directly comparable with the one in Lucas et al. (2019).
Extra results are provided in the appendices for interested readers: the robustness when using large β (Appendix A.2), a CIFAR-100 experiment (Appendix A.6) and comparison with classical momentum (Polyak, 1964), AggMo (Lucas et al., 2019) and QHM (Ma & Yarats, 2019) (Appendix A.3).
6 CONCLUSIONS
We presented Amortized Nesterov’s Momentum, which is a special variant of Nesterov’s momentum that utilizes several past iterates to provide the momentum. Based on this idea, we designed two different realizations, namely, AM1-SGD and AM2-SGD. Both of them are simple to implement with little-to-no additional tuning overhead over M-SGD. Our empirical results demonstrate that switching to AM1-SGD and AM2-SGD produces faster early convergence and comparable final generalization performance. AM1-SGD is lightweight and has more robust iterates than M-SGD, and thus can serve as a favorable alternative to M-SGD in large-scale deep learning tasks. AM2-SGD could be favorable for more restrictive tasks (e.g., asynchronous training) due to its extensibility and good performance. Both the methods are proved optimal in the convex case, just like M-SGD. Based on the intuition from convex analysis, the proposed methods are trading acceleration for variance control, which provides hints for the hyper-parameter tuning.
11SGD+ASGD is to run SGD and switch to averaged SGD (ASGD) when a threshold is met.
Appendices

A Extra Experimental Results
  A.1 The effect of m on convergence
  A.2 Robustness on large momentum parameters
  A.3 Comparison with other momentum
  A.4 Issues with learning rate schedulers
  A.5 Test accuracy results of Figure 4 & Table 2
  A.6 CIFAR-100 experiment
  A.7 A sanity check

B Missing parts in Section 4
  B.1 The reformulations
  B.2 Proofs of Theorem 1 and Theorem 2
    B.2.1 Proof of Lemma 1
    B.2.2 Proof of Theorem 1a
    B.2.3 Proof of Theorem 1b
    B.2.4 Proof of Theorem 2
  B.3 Connections between AM1-SGD and Katyusha

C Miscellanies
  C.1 Comparison of SGD and M-SGD
  C.2 Training evaluation

D Experimental Setup
  D.1 Classification Setup
  D.2 Language Model Setup
A EXTRA EXPERIMENTAL RESULTS
In this appendix, we provide more experimental results to further evaluate the Amortized Nesterov’s Momentum. Table 4 shows the detailed data of the parameter sweep experiments, where the convergence curves of these results are given in Appendix A.1. In Appendix A.2, we compare the robustness of AM1-SGD and M-SGD on large momentum parameters. In Appendix A.3, we empirically compare the Amortized Nesterov’s Momentum with classical momentum (Polyak, 1964), aggregated momentum (Lucas et al., 2019) and quasi-hyperbolic momentum (Ma & Yarats, 2019). We discuss the issues with learning rate schedulers in Appendix A.4. We report the test accuracy results of the ResNet18 experiment (in Section 4) in Appendix A.5. A CIFAR-100 experiment is provided in Appendix A.6. We also provide a sanity check for our implementation in Appendix A.7.
[Table 4: method, description, final accuracy and average STD for each parameter-sweep setting; the tabular data is not reproduced here.]
A.1 THE EFFECT OF m ON CONVERGENCE
We show in Figure 6 how m affects the convergence of test accuracy. The results show that increasing m speeds up the convergence in the early stage. While for AM1-SGD the convergences of Option I and Option II are similar, AM2-SGD with Option II is consistently better than with Option I in this experiment. It seems that AM2-SGD with Option I does not benefit from increasing m and the algorithm is not robust. Thus, we do not recommend using Option I for AM2-SGD.
A.2 ROBUSTNESS ON LARGE MOMENTUM PARAMETERS
We compare the robustness of M-SGD and AM1-SGD when β is large in Figure 7 & Table 5. For fair comparison, AM1-SGD uses Option I. As we can see, the STD error of M-SGD scales up significantly when β is larger and the performance is more affected by a large β compared with AM1-SGD.
A.3 COMPARISON WITH OTHER MOMENTUM
In this section, we compare AM1-SGD (Option I) with classical momentum (Polyak, 1964), AggMo (Lucas et al., 2019) and QHM (Ma & Yarats, 2019) in our basic case study (training ResNet34 on
CIFAR-10). Since we are not aware of what makes a fair comparison with these methods (e.g., it is not clear what is the effective learning rate for AM1-SGD), we compare them based on the default hyper-parameter settings suggested by their papers.
Classical Momentum The SGD with classical momentum (CM-SGD) that is widely used in deep learning has the following scheme (standard PyTorch) (v^{cm} ∈ R^d, v^{cm}_0 = 0):

v^{cm}_{k+1} = β · v^{cm}_k + ∇f_{i_k}(x_k),
x_{k+1} = x_k − η · v^{cm}_{k+1}, for k ≥ 0.
CM-SGD with its typical hyper-parameter settings (η0 = 0.1, β = 0.9) is observed to achieve similar generalization performance as M-SGD. However, CM-SGD is more unstable and prone to oscillations (Lucas et al., 2019), which makes it less robust than M-SGD as shown in Table 6.
Aggregated Momentum (AggMo) AggMo combines multiple momentum buffers, which is inspired by the passive damping from physics literature (Lucas et al., 2019). AggMo uses the following update rules (for t = 1, . . . , T, v^{(t)} ∈ R^d, v^{(t)}_0 = 0):

v^{(t)}_{k+1} = β^{(t)} · v^{(t)}_k − ∇f_{i_k}(x_k), for t = 1, . . . , T,
x_{k+1} = x_k + (η/T) · Σ_{t=1}^{T} v^{(t)}_{k+1}, for k ≥ 0.
We used the exponential hyper-parameter setting recommended in the original work with the scale-factor a = 0.1 fixed, β^{(t)} = 1 − a^{t−1}, for t = 1, . . . , T, and choosing T in {2, 3, 4}. We found that T = 2 gave the best performance in this experiment. As shown in Figure 8 & Table 6, with the help of passive damping, AggMo is more stable and robust compared with CM-SGD.
Quasi-hyperbolic Momentum (QHM) Ma & Yarats (2019) introduce the immediate discount factor ν ∈ R for the momentum scheme, which results in the QHM update rules (α ∈ R, v^{qh} ∈ R^d, v^{qh}_0 = 0):

v^{qh}_{k+1} = β · v^{qh}_k + (1 − β) · ∇f_{i_k}(x_k),
x_{k+1} = x_k − α · (ν · v^{qh}_{k+1} + (1 − ν) · ∇f_{i_k}(x_k)), for k ≥ 0.

Here we used the recommended hyper-parameter setting for QHM (α_0 = 1.0, β = 0.999, ν = 0.7).
Figure 8 shows that AM1-SGD, AggMo and QHM achieve faster convergence in the early stage while CM-SGD has the highest final accuracy. In terms of robustness, huge gaps are observed when comparing AM1-SGD with the remaining methods in Table 6. Note that AM1-SGD is more efficient than both QHM and AggMo, and is as efficient as CM-SGD.
We also plot the convergence of train-batch loss for all the methods in Figure 9. Despite showing worse generalization performance, both QHM and AggMo perform better on reducing the train-batch loss in this experiment, which is consistent with the results reported in Ma & Yarats (2019); Lucas et al. (2019).
A.4 ISSUES WITH LEARNING RATE SCHEDULERS
We show in Figure 10 that when β is large for the task and a step learning rate scheduler with decay factor 10 is used, a performance drop is observed after each reduction. Both Option I and Option II have this issue and the curves are basically identical. Here we only use Option II. We fix this issue by performing a restart after each learning rate reduction (labeled with ‘+’). We plot the train-batch loss here because we find the phenomenon is clearer in this way. If β = 0.9, there is no observable performance drop in this experiment.
For smooth-changing schedulers such as the cosine annealing scheduler (Loshchilov & Hutter, 2016), the amortized momentum works well as shown in Figure 11.
A.5 TEST ACCURACY RESULTS OF FIGURE 4 & TABLE 2
We report the test accuracy results of the experiments in Section 4 in Figure 12 & Table 7. These results are reminiscent of the ResNet34 experiments (Figure 3 & Table 1).
A.6 CIFAR-100 EXPERIMENT
We report the results of training DenseNet121 (Huang et al., 2017) on CIFAR-100 in Figure 13, which shows that both AM1-SGD and AM2-SGD perform well before the final learning rate reduction. However, the final accuracies are lowered around 0.6% compared with M-SGD. We also notice that SGD reduces the train-batch loss at an incredibly fast rate and the losses it reaches are consistently lower than other methods in the entire 300 epochs. However, this performance is not
reflected in the convergence of test accuracy. We believe that this phenomenon suggests that the DenseNet model is actually “overfitting” M-SGD (since in the ResNet experiments, M-SGD always achieves a lower train loss than SGD after the final learning rate reduction).
A.7 A SANITY CHECK
When m = 1, both AM1-SGD and AM2-SGD are equivalent to M-SGD; we plot their convergence in Figure 14 as a sanity check (the detailed data is given in Table 4).
We observed that when m = 1, both AM1-SGD and AM2-SGD have a lower STD error than M-SGD. We believe that it is because they both maintain the iterates without scaling, which is numerically more stable than M-SGD (M-SGD in standard PyTorch maintains a scaled buffer, i.e., v^{pt}_k = η^{−1}β^{−1} · (y_k − x_k)).
B MISSING PARTS IN SECTION 4
B.1 THE REFORMULATIONS
When h ≡ 0 and β is a constant, we do the reformulations by eliminating the sequence {z_k}. For the reformulated AM2-SGD,

x_k^{j_k} = (1 − β) · z_k + β · φ_{j_k}^k,
z_{k+1} = z_k − α · ∇f_{i_k}(x_k^{j_k}),
φ_{j_k}^{k+1} = (1 − β) · z_{k+1} + β · φ_{j_k}^k,
(x_{k+1}^{j_{k+1}} = (1 − β) · z_{k+1} + β · φ_{j_{k+1}}^{k+1}),

setting α(1 − β) = η and eliminating {z_k} yields

φ_{j_k}^{k+1} = x_k^{j_k} − η · ∇f_{i_k}(x_k^{j_k}),
x_{k+1}^{j_{k+1}} = φ_{j_k}^{k+1} + β · (φ_{j_{k+1}}^{k+1} − φ_{j_k}^k),

which is exactly Algorithm 2.
For the reformulated AM1-SGD, when h ≡ 0, the inner loops are basically SGD:

x_k = (1 − β) · z_k + β · x̃_s,
z_{k+1} = z_k − α · ∇f_{i_k}(x_k),
(x_{k+1} = (1 − β) · z_{k+1} + β · x̃_s,)

which, with α(1 − β) = η and {z_k} eliminated, reduces to x_{k+1} = x_k − η · ∇f_{i_k}(x_k). At the end of each inner loop (i.e., when (k + 1) mod m = 0), we have

x_{(s+1)m} = (1 − β) · z_{(s+1)m} + β · x̃_s,

while at the beginning of the next inner loop,

x_{(s+1)m} = (1 − β) · z_{(s+1)m} + β · x̃_{s+1},

which means that we need to set x_{k+1} ← x_{k+1} + β · (x̃_{s+1} − x̃_s) (reassign the value of x_{k+1}). We also give the reformulation of M-SGD (scheme (1)) to the Auslender & Teboulle (2006) scheme for reference:
x_k = (1 − β) · z_k + β · y_k,
z_{k+1} = z_k − α · ∇f_{i_k}(x_k),
y_{k+1} = (1 − β) · z_{k+1} + β · y_k,
(x_{k+1} = (1 − β) · z_{k+1} + β · y_{k+1}),

which is the Auslender & Teboulle (2006) scheme (AC-SA (Lan, 2012)); setting α(1 − β) = η and eliminating {z_k} yields

y_{k+1} = x_k − η · ∇f_{i_k}(x_k),
x_{k+1} = y_{k+1} + β · (y_{k+1} − y_k),

which is the scheme of Nesterov (1983; 2013b). AC-SA (in the Euclidean case) maps to the Auslender & Teboulle (2006) scheme through (in the original notations) x = x^{md}, z = x, y = x^{ag}, 1 − β = β_t^{−1}, α = γ_t.
Intuition for the Auslender & Teboulle (2006) scheme can be found in Remark 2 in Lan (2012).
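As a quick numerical sanity check of the AM1-SGD reformulation (a sketch on a quadratic with h ≡ 0 and a constant β): with α(1 − β) = η, the tail averages x̃_s of the reformulated scheme should coincide with the x̃ maintained by Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, S = 5, 4, 6
A = rng.standard_normal((d, d)); A = A.T @ A + np.eye(d)  # f(x) = 0.5 x'Ax
grad = lambda x: A @ x
x0 = rng.standard_normal(d)
eta, beta = 0.01, 0.9
alpha = eta / (1 - beta)

# Algorithm 1 (Option II)
x, xt, xtp = x0.copy(), x0.copy(), np.zeros(d)
for k in range(S * m):
    x = x - eta * grad(x)
    xtp = xtp + x / m
    if (k + 1) % m == 0:
        x = x + beta * (xtp - xt)
        xt, xtp = xtp, np.zeros(d)

# Reformulated AM1-SGD with constant beta
xts, z = x0.copy(), x0.copy()
for s in range(S):
    block = []
    for _ in range(m):
        xk = (1 - beta) * z + beta * xts
        z = z - alpha * grad(xk)
        block.append((1 - beta) * z + beta * xts)
    xts = np.mean(block, axis=0)

print(np.allclose(xt, xts))  # should print True (up to rounding)
```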
B.2 PROOFS OF THEOREM 1 AND THEOREM 2
The reformulated schemes are copied here for reference:
AM1-SGD (reformulated, proximal)
Initialize: x̃_0 = z_0 = x_0, S = K/m.
1: for s = 0, . . . , S − 1 do
2:   for j = 0, . . . , m − 1 do
3:     k = sm + j.
4:     x_k = (1 − β_s) · z_k + β_s · x̃_s.
5:     z_{k+1} = prox_{α_s h}{z_k − α_s · ∇f_{i_k}(x_k)}.
6:     (x_{k+1} = (1 − β_s) · z_{k+1} + β_s · x̃_s.)
7:   end for
8:   x̃_{s+1} = (1/m) Σ_{j=1}^{m} x_{sm+j}.
9: end for
Output: x̃_S.

AM2-SGD (reformulated, proximal)
Initialize: z_0 = φ_j^0 = x_0, ∀j ∈ [m].
1: for k = 0, . . . , K − 1 do
2:   Sample j_k uniformly in [m].
3:   x_k^{j_k} = (1 − β_k) · z_k + β_k · φ_{j_k}^k.
4:   z_{k+1} = prox_{α_k h}{z_k − α_k · ∇f_{i_k}(x_k^{j_k})}.
5:   φ_{j_k}^{k+1} = (1 − β_k) · z_{k+1} + β_k · φ_{j_k}^k.
6: end for
Output: φ̄_K = (1/m) Σ_{j=1}^{m} φ_j^K.
Comparing the reformulated schemes, we see that their iterations can be generalized as follows:

x = (1 − β) · z + β · y,
z⁺ = prox_{αh}{z − α · ∇f_i(x)},
y⁺ = (1 − β) · z⁺ + β · y.  (6)
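In code, one step of scheme (6) is only a few lines (a sketch; `prox` is the user-supplied proximal operator, called as `prox(v, alpha)` for prox_{αh}(v)):

```python
def acceleration_step(z, y, grad_i, prox, alpha, beta):
    """One step of scheme (6), the Auslender & Teboulle-style accelerated update."""
    x = (1 - beta) * z + beta * y          # extrapolated query point
    z_plus = prox(z - alpha * grad_i(x), alpha)
    y_plus = (1 - beta) * z_plus + beta * y
    return z_plus, y_plus
```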
This type of scheme is first proposed in Auslender & Teboulle (2006), which represents one of the simplest variants of the Nesterov’s methods (see Tseng (2008) for other variants). The scheme is then modified into various settings (Hu et al., 2009; Lan, 2012; Ghadimi & Lan, 2012; 2016; Zhou et al., 2019; Lan et al., 2019) to achieve acceleration. The following lemma serves as a cornerstone for the convergence proofs of AM1-SGD and AM2-SGD.
Lemma 1. If α(1 − β) < 1/L, the update scheme (6) satisfies the following recursion:

(1/(1 − β))(F(y⁺) − F(x⋆)) ≤ (β/(1 − β))(F(y) − F(x⋆)) + (1/(2α))(‖z − x⋆‖² − ‖z⁺ − x⋆‖²) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2(α^{−1} − L(1 − β))) + ⟨∇f(x) − ∇f_i(x), z − x⋆⟩.
B.2.1 PROOF OF LEMMA 1
This Lemma is similarly provided in Lan (2012); Ghadimi & Lan (2012) under a more general setting that allows non-Euclidean norms in the assumptions; we give a proof here for completeness.
Based on the convexity (Assumption (a)), we have

f(x) − f(x⋆) ≤ ⟨∇f(x), x − z⟩ (≜ R_0) + ⟨∇f(x) − ∇f_i(x), z − x⋆⟩ (≜ R_1) + ⟨∇f_i(x), z − z⁺⟩ (≜ R_2) + ⟨∇f_i(x), z⁺ − x⋆⟩ (≜ R_3).  (7)
We upper bound the terms on the right side one-by-one.
For R_0,

R_0 = (β/(1 − β))⟨∇f(x), y − x⟩ ≤ (β/(1 − β))(f(y) − f(x)),  (8)

where the equality uses the relation between x and z, i.e., (1 − β) · (x − z) = β · (y − x). For R_2, based on Assumption (a), we have

f(y⁺) − f(x) + ⟨∇f(x), x − y⁺⟩ ≤ (L/2)‖x − y⁺‖² + M‖x − y⁺‖.

Then, noting that x − y⁺ = (1 − β) · (z − z⁺), we can arrange the above inequality as

R_2 ≤ (L(1 − β)/2)‖z − z⁺‖² + (1/(1 − β))(f(x) − f(y⁺)) + ⟨∇f(x) − ∇f_i(x), z⁺ − z⟩ + M‖z − z⁺‖
    ≤ (L(1 − β)/2)‖z − z⁺‖² + (1/(1 − β))(f(x) − f(y⁺)) + (‖∇f(x) − ∇f_i(x)‖ + M)‖z − z⁺‖.

Using Young’s inequality with ζ > 0, we obtain

R_2 ≤ ((L(1 − β) + ζ)/2)‖z − z⁺‖² + (1/(1 − β))(f(x) − f(y⁺)) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2ζ).  (9)
For R_3, based on the optimality condition of prox_{αh}{z − α · ∇f_i(x)} and denoting ∂h(z⁺) as a subgradient of h at z⁺, we have for any u ∈ X,

⟨α · ∂h(z⁺) + z⁺ − z + α · ∇f_i(x), u − z⁺⟩ ≥ 0,
⟨∇f_i(x), z⁺ − u⟩ ≤ ⟨∂h(z⁺), u − z⁺⟩ + (1/α)⟨z⁺ − z, u − z⁺⟩ ≤ h(u) − h(z⁺) + (1/α)⟨z⁺ − z, u − z⁺⟩.

Choosing u = x⋆,

R_3 ≤ h(x⋆) − h(z⁺) + (1/α)⟨z⁺ − z, x⋆ − z⁺⟩ = h(x⋆) − h(z⁺) + (1/(2α))(‖z − x⋆‖² − ‖z⁺ − x⋆‖² − ‖z⁺ − z‖²),  (10)

where the equality follows from ‖a + b‖² = ‖a‖² + ‖b‖² + 2⟨a, b⟩. Finally, by upper bounding (7) using (8), (9), (10), we conclude that
f(x) − f(x⋆) ≤ R_1 + (β/(1 − β))(f(y) − f(x)) + ((L(1 − β) + ζ − α^{−1})/2)‖z − z⁺‖² + (1/(1 − β))(f(x) − f(y⁺)) + h(x⋆) − h(z⁺) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2ζ) + (1/(2α))(‖z − x⋆‖² − ‖z⁺ − x⋆‖²).

After simplification,

(1/(1 − β))(f(y⁺) − f(x⋆)) ≤ (β/(1 − β))(f(y) − f(x⋆)) + ((L(1 − β) + ζ − α^{−1})/2)‖z − z⁺‖² + h(x⋆) − h(z⁺) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2ζ) + R_1 + (1/(2α))(‖z − x⋆‖² − ‖z⁺ − x⋆‖²).  (11)

Note that with the convexity of h and y⁺ = (1 − β) · z⁺ + β · y, we have

h(y⁺) ≤ (1 − β)h(z⁺) + βh(y), i.e., h(z⁺) ≥ (1/(1 − β))h(y⁺) − (β/(1 − β))h(y).

Using the above inequality and choosing ζ = α^{−1} − L(1 − β) > 0 ⇒ α(1 − β) < 1/L, we can arrange (11) as

(1/(1 − β))(F(y⁺) − F(x⋆)) ≤ (β/(1 − β))(F(y) − F(x⋆)) + (1/(2α))(‖z − x⋆‖² − ‖z⁺ − x⋆‖²) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2(α^{−1} − L(1 − β))) + R_1.
B.2.2 PROOF OF THEOREM 1A
Using Assumption (c), Lemma 1 with

x = x_k, z = z_k, z⁺ = z_{k+1}, y = x̃_s, y⁺ = x_{k+1}, α = α_s, β = β_s,  (12)

and taking expectation, if α_s(1 − β_s) < 1/L, we have

(1/(1 − β_s))(E_{i_k}[F(x_{k+1})] − F(x⋆)) + (1/(2α_s)) E_{i_k}[‖z_{k+1} − x⋆‖²] ≤ (β_s/(1 − β_s))(F(x̃_s) − F(x⋆)) + (1/(2α_s))‖z_k − x⋆‖² + (σ + M)²/(2(α_s^{−1} − L(1 − β_s))).
Summing the above inequality from k = sm, . . . , sm + m − 1, we obtain

(1/((1 − β_s)m)) Σ_{j=1}^{m} (E[F(x_{sm+j})] − F(x⋆)) + (1/(2α_s m)) E[‖z_{(s+1)m} − x⋆‖²] ≤ (β_s/(1 − β_s))(F(x̃_s) − F(x⋆)) + (1/(2α_s m))‖z_{sm} − x⋆‖² + (σ + M)²/(2(α_s^{−1} − L(1 − β_s))).

Using the definition of x̃_{s+1} and convexity,

(α_s/(1 − β_s))(E[F(x̃_{s+1})] − F(x⋆)) + (1/(2m)) E[‖z_{(s+1)m} − x⋆‖²] ≤ (α_s β_s/(1 − β_s))(F(x̃_s) − F(x⋆)) + (1/(2m))‖z_{sm} − x⋆‖² + α_s(σ² + M²)/(α_s^{−1} − L(1 − β_s)).  (13)
It can be verified that with the choices β_s = s/(s + 2) and α_s = λ_1/(L(1 − β_s)), the following holds for s ≥ 0:

α_{s+1}β_{s+1}/(1 − β_{s+1}) ≤ α_s/(1 − β_s) and β_0 = 0.  (14)
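For completeness, the verification of (14) takes one line: since 1 − β_s = 2/(s + 2), we have α_s/(1 − β_s) = λ_1(s + 2)²/(4L), so

α_{s+1}β_{s+1}/(1 − β_{s+1}) = (λ_1(s + 3)²/(4L)) · ((s + 1)/(s + 3)) = λ_1(s + 1)(s + 3)/(4L) ≤ λ_1(s + 2)²/(4L) = α_s/(1 − β_s),

because (s + 1)(s + 3) = (s + 2)² − 1.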
Note that since our analysis aims at providing intuition, we do not refine the choice of αs as in (Hu et al., 2009; Ghadimi & Lan, 2012). Thus, by telescoping (13) from s = S − 1, . . . , 0, we obtain
(α_{S−1}/(1 − β_{S−1}))(E[F(x̃_S)] − F(x⋆)) + (1/(2m)) E[‖z_{Sm} − x⋆‖²] ≤ (1/(2m))‖x_0 − x⋆‖² + Σ_{s=0}^{S−1} α_s(σ² + M²)/(α_s^{−1} − L(1 − β_s)),

and thus,

E[F(x̃_S)] − F(x⋆) ≤ (2L/(λ_1 m(S + 1)²))‖x_0 − x⋆‖² + (4L(σ² + M²)/(λ_1(S + 1)²)) Σ_{s=0}^{S−1} α_s²/(1 − α_s(1 − β_s)L)
  (a)≤ (2L/(λ_1 m(S + 1)²))‖x_0 − x⋆‖² + (3λ_1(σ² + M²)/(L(S + 1)²)) Σ_{s=0}^{S−1} (s + 2)²
  (b)≤ (2L/(λ_1 m(S + 1)²))‖x_0 − x⋆‖² + 8λ_1(σ² + M²)(S + 1)/L,

where (a) follows from λ_1 ≤ 2/3 and (b) holds because x ↦ (x + 2)² is non-decreasing for x ≥ 0 and thus

Σ_{s=0}^{S−1} (s + 2)² ≤ ∫_0^S (x + 2)² dx ≤ (S + 2)³/3 ≤ 8(S + 1)³/3.
Denoting

λ_1⋆ ≜ L‖x_0 − x⋆‖ / (2√m √(σ² + M²) (S + 1)^{3/2}),

and based on the choice of λ_1 = min{2/3, λ_1⋆}, if λ_1⋆ ≤ 2/3, we have

E[F(x̃_S)] − F(x⋆) ≤ 8‖x_0 − x⋆‖√(σ² + M²) / (m^{1/2}(S + 1)^{1/2}).

If λ_1⋆ > 2/3,

E[F(x̃_S)] − F(x⋆) ≤ 3L‖x_0 − x⋆‖²/(m(S + 1)²) + 4‖x_0 − x⋆‖√(σ² + M²)/(m^{1/2}(S + 1)^{1/2}).

Thus, we conclude that

E[F(x̃_S)] − F(x⋆) ≤ 3L‖x_0 − x⋆‖²/(m(S + 1)²) + 8‖x_0 − x⋆‖√(σ² + M²)/(m^{1/2}(S + 1)^{1/2}).
Substituting S = K/m completes the proof.
B.2.3 PROOF OF THEOREM 1B
In order to prove Theorem 1b, we need the following known result for the martingale difference (cf. Lemma 2 in Lan et al. (2012)):

Lemma 2. With N > 0, let ξ_0, ξ_1, . . . , ξ_{N−1} be a sequence of i.i.d. random variables, for t = 0, . . . , N − 1, let σ_t > 0 be a deterministic number and ψ_t = ψ_t(ξ_0, . . . , ξ_t) be a deterministic measurable function such that E_{ξ_t}[ψ_t] = 0 a.s. and E_{ξ_t}[exp{ψ_t²/σ_t²}] ≤ exp{1} a.s.. Then for any Λ ≥ 0,

Prob{ Σ_{t=0}^{N−1} ψ_t ≥ Λ √(Σ_{t=0}^{N−1} σ_t²) } ≤ exp{−Λ²/3}.

To start with, using Lemma 1 with the parameter mapping (12), we have
(1/(1 − β_s))(F(x_{k+1}) − F(x⋆)) + (1/(2α_s))‖z_{k+1} − x⋆‖²
≤ (β_s/(1 − β_s))(F(x̃_s) − F(x⋆)) + (1/(2α_s))‖z_k − x⋆‖² + (‖∇f(x_k) − ∇f_{i_k}(x_k)‖ + M)²/(2(α_s^{−1} − L(1 − β_s))) + ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩
≤ (β_s/(1 − β_s))(F(x̃_s) − F(x⋆)) + (1/(2α_s))‖z_k − x⋆‖² + M²/(α_s^{−1} − L(1 − β_s)) + ‖∇f(x_k) − ∇f_{i_k}(x_k)‖²/(α_s^{−1} − L(1 − β_s)) + ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩.
Summing the above inequality from k = sm, . . . , sm + m − 1 and using the choice α_s = λ_1/(L(1 − β_s)) with λ_1 ≤ 2/3, we obtain

(α_s/(1 − β_s))(F(x̃_{s+1}) − F(x⋆)) + (1/(2m))‖z_{(s+1)m} − x⋆‖²
≤ (α_s β_s/(1 − β_s))(F(x̃_s) − F(x⋆)) + (1/(2m))‖z_{sm} − x⋆‖² + 3α_s²M² + (3α_s²/m) Σ_{k=sm}^{sm+m−1} ‖∇f(x_k) − ∇f_{i_k}(x_k)‖² + (α_s/m) Σ_{k=sm}^{sm+m−1} ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩.

With our parameter choices, the relations in (14) hold and thus we can telescope the above inequality from s = S − 1, . . . , 0:

(α_{S−1}/(1 − β_{S−1}))(F(x̃_S) − F(x⋆)) ≤ (1/(2m))‖x_0 − x⋆‖² + 3M² Σ_{s=0}^{S−1} α_s² + (3/m) R_4 + (1/m) R_5,  (15)

where R_4 ≜ Σ_{k=0}^{K−1} α_{⌊k/m⌋}² ‖∇f(x_k) − ∇f_{i_k}(x_k)‖² and R_5 ≜ Σ_{k=0}^{K−1} α_{⌊k/m⌋} ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩.
Denoting V_k² ≜ ‖∇f(x_k) − ∇f_{i_k}(x_k)‖² and ᾱ ≜ Σ_{k=0}^{K−1} α_{⌊k/m⌋}² = m Σ_{s=0}^{S−1} α_s², for R_4, by Jensen’s inequality, we have

E[exp{ (1/ᾱ) Σ_{k=0}^{K−1} α_{⌊k/m⌋}² V_k²/σ² }] ≤ (1/ᾱ) Σ_{k=0}^{K−1} α_{⌊k/m⌋}² E[exp{V_k²/σ²}] ≤ exp{1},

where the last inequality uses the additional assumption E_{i_k}[exp{V_k²/σ²}] ≤ exp{1}. Then, based on Markov’s inequality, we have for any Λ ≥ 0,

Prob{ exp{ (1/ᾱ) Σ_{k=0}^{K−1} α_{⌊k/m⌋}² V_k²/σ² } ≥ exp{Λ + 1} } ≤ exp{−Λ},

that is, Prob{ R_4 ≥ (Λ + 1)σ² m Σ_{s=0}^{S−1} α_s² } ≤ exp{−Λ}.  (16)
For R_5, since we have E_{i_k}[α_{⌊k/m⌋}⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩] = 0 and

E_{i_k}[exp{ α_{⌊k/m⌋}²⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩² / (α_{⌊k/m⌋}²σ²D_X²) }] ≤ E_{i_k}[exp{V_k²/σ²}] ≤ exp{1},

which is based on the “light tail” assumption, using Lemma 2, we obtain

Prob{ R_5 ≥ ΛσD_X √(m Σ_{s=0}^{S−1} α_s²) } ≤ exp{−Λ²/3}.  (17)

Combining (15), (16) and (17), based on the parameter setting (cf. (5)) and using the notation
K_0(m) ≜ 3Lm‖x_0 − x⋆‖²/(K + m)² + 8‖x_0 − x⋆‖√(σ² + M²)/√(K + m),
R_6 ≜ (12Lσ²/(λ_1(S + 1)²)) Σ_{s=0}^{S−1} α_s² + (4LσD_X/(λ_1(S + 1)²)) √(m Σ_{s=0}^{S−1} α_s²),

we conclude that

Prob{ F(x̃_S) − F(x⋆) ≤ K_0(m) + ΛR_6 } ≥ 1 − (exp{−Λ²/3} + exp{−Λ}).

For R_6, using the choice of α_s and λ_1, we obtain

R_6 ≤ 4√6 σD_X/(3√(K + m)) + 8λ_1σ²(S + 1)/L ≤ 4√6 σD_X/(3√(K + m)) + 4σ²‖x_0 − x⋆‖/(√(K + m)√(σ² + M²)) ≤ 4σ(3‖x_0 − x⋆‖ + √6 D_X)/(3√(K + m)),

which completes the proof.
B.2.4 PROOF OF THEOREM 2
Using Assumption (c), Lemma 1 with

x = x_k^{j_k}, z = z_k, z⁺ = z_{k+1}, y = φ_{j_k}^k, y⁺ = φ_{j_k}^{k+1}, α = α_k, β = β_k,

and taking expectation, if α_k(1 − β_k) < 1/L, we have

(1/(1 − β_k)) E_{i_k,j_k}[F(φ_{j_k}^{k+1}) − F(x⋆)] + (1/(2α_k)) E_{i_k,j_k}[‖z_{k+1} − x⋆‖²] ≤ (β_k/(1 − β_k)) E_{j_k}[F(φ_{j_k}^k) − F(x⋆)] + (1/(2α_k))‖z_k − x⋆‖² + (σ + M)²/(2(α_k^{−1} − L(1 − β_k))).  (18)
Note that

E_{i_k,j_k}[F(φ_{j_k}^{k+1}) − F(x⋆)] = E_{i_k,j_k}[ Σ_{j=1}^{m} (F(φ_j^{k+1}) − F(x⋆)) ] − E_{j_k}[ Σ_{j≠j_k} (F(φ_j^k) − F(x⋆)) ].

Dividing both sides of (18) by m and then adding (1/((1 − β_k)m)) E_{j_k}[Σ_{j≠j_k}(F(φ_j^k) − F(x⋆))] to both sides, we obtain

(1/(1 − β_k)) E_{i_k,j_k}[ (1/m) Σ_{j=1}^{m} F(φ_j^{k+1}) − F(x⋆) ] + (1/(2α_k m)) E_{i_k,j_k}[‖z_{k+1} − x⋆‖²]
≤ −(1/m) E_{j_k}[F(φ_{j_k}^k) − F(x⋆)] + (1/(1 − β_k))( (1/m) Σ_{j=1}^{m} F(φ_j^k) − F(x⋆) ) + (1/(2α_k m))‖z_k − x⋆‖² + (σ + M)²/(2m(α_k^{−1} − L(1 − β_k)))
= ((1 − (1 − β_k)/m)/(1 − β_k)) ( (1/m) Σ_{j=1}^{m} F(φ_j^k) − F(x⋆) ) + (1/(2α_k m))‖z_k − x⋆‖² + (σ + M)²/(2m(α_k^{−1} − L(1 − β_k))).  (19)
It can be verified that with our parameter choice β_k = (k/m)/(k/m + 2) and α_k = λ_2/(L(1 − β_k)), the following holds for k ≥ 0:

α_{k+1} · (1 − (1 − β_{k+1})/m)/(1 − β_{k+1}) ≤ α_k/(1 − β_k) and β_0 = 0.
Note that since our analysis aims at providing intuition, we do not refine the choice of α_s as in (Hu et al., 2009; Ghadimi & Lan, 2012). Then, we can telescope (19) from k = K − 1, . . . , 0, which results in

(α_{K−1}/(1 − β_{K−1})) E[ (1/m) Σ_{j=1}^{m} F(φ_j^K) − F(x⋆) ] + (1/(2m)) E[‖z_K − x⋆‖²]
≤ (λ_2(m − 1)/(Lm))(F(x_0) − F(x⋆)) + (1/(2m))‖x_0 − x⋆‖² + Σ_{k=0}^{K−1} α_k(σ + M)²/(2m(α_k^{−1} − L(1 − β_k))).
Using the definition of φ̄_K and convexity, we obtain

E[F(φ̄_K) − F(x⋆)] ≤ ((1 − β_{K−1})/α_{K−1}) ( (λ_2(m − 1)/(Lm))(F(x_0) − F(x⋆)) + (1/(2m))‖x_0 − x⋆‖² ) + ((1 − β_{K−1})/α_{K−1}) Σ_{k=0}^{K−1} α_k(σ + M)²/(2m(α_k^{−1} − L(1 − β_k)))

(a)= 4(m − 1)(F(x_0) − F(x⋆)) / (m((K−1)/m + 2)²) + 2L‖x_0 − x⋆‖² / (λ_2 m((K−1)/m + 2)²) + (3λ_2(σ + M)²/(2Lm((K−1)/m + 2)²)) Σ_{k=0}^{K−1} (k/m + 2)²

(b)≤ 4(m − 1)(F(x_0) − F(x⋆)) / (m((K−1)/m + 2)²) + 2L‖x_0 − x⋆‖² / (λ_2 m((K−1)/m + 2)²) + 4λ_2(σ + M)²((K−1)/m + 2)/L,  (20)

where (a) uses λ_2 ≤ 2/3 and (b) follows from simple integration arguments and that K/m + 2 ≤ 2((K−1)/m + 2) since K ≥ 1, m ≥ 1.
Based on the choice of

λ_2 = min{ 2/3, L‖x_0 − x⋆‖ / (√(2m)(σ + M)((K−1)/m + 2)^{3/2}) },

(20) can be further upper bounded as

E[F(φ̄_K) − F(x⋆)] ≤ 4(m − 1)(F(x_0) − F(x⋆)) / (m((K−1)/m + 2)²) + 3L‖x_0 − x⋆‖² / (m((K−1)/m + 2)²) + 4√2 ‖x_0 − x⋆‖(σ + M) / (m^{1/2}((K−1)/m + 2)^{1/2}).
B.3 CONNECTIONS BETWEEN AM1-SGD AND KATYUSHA
The discussion in this section aims to shed light on the understanding of the experimental results, which also shows some interesting relations between AM1-SGD and Katyusha.
The high level idea of Katyusha momentum is that it works as a “magnet” inside an epoch of SVRG updates, which “stabilizes” the iterates so as to make Nesterov’s momentum effective (Allen-Zhu, 2018). In theory, the key effect of Katyusha momentum is that it allows the tightest possible variance bound for the stochastic gradient estimator of SVRG (cf. Lemma 2.4 and its comments in AllenZhu (2018)). In this sense, we can interpret Katyusha momentum as a variance reducer that further reduces the variance of SVRG. Below we show the similarity between the construction of Katyusha and AM1-SGD, based on which we conjecture that the amortized momentum can also reduce the variance of SGD (and thus increase the robustness). However, in theory, following a similar analysis of Katyusha, we cannot guarantee a reduction of σ in the worst case.
Deriving AM1-SGD from Katyusha Katyusha has the following scheme (non-proximal, in the original notations, σ is the strong convexity parameter, cf. Algorithm 1 with Option I in Allen-Zhu (2018))12:
Initialize: x̃0 = y0 = z0 = x0, η = 13L , ω = 1 + ασ. 1: for s = 0, . . . , S − 1 do 2: Compute and store∇f(x̃s). 3: for j = 0, . . . ,m− 1 do 4: k = sm+ j. 5: xk = τ1 · zk + τ2 · x̃s + (1− τ1 − τ2) · yk. 6: ∇̃k = ∇fik(xk)−∇fik(x̃s) | 1. How does the proposed method incorporate Nesterov momentum into standard SGD for deep learning?
2. What are the advantages and disadvantages of the proposed method compared to traditional Nesterov acceleration?
3. How does the paper delineate the relationship between the proposed method and Polyak's heavy ball method?
4. What are the limitations of the experimental results presented in the paper?
5. Why does SGD still have better convergence early on, and how does the proposed method address this issue?
6. Can you explain the assumptions made in the paper and their significance in the context of dictionary learning?
7. How does the step size constraint impact the performance of the proposed method?
8. What are some of the minor comments or suggestions for improving the writing style and clarity of the paper?
9. Are there any missing references or relevant works that should be included in the paper?

Review
This paper provides a new simple method to incorporate Nesterov momentum into standard SGD for deep learning, with good empirical and theoretical results. Overall I think this paper should be accepted, some minor comments follow.
At no point does Polyak's heavy ball method get mentioned, even though the variant of Nesterov acceleration you are considering is very similar to it (since the momentum parameter is fixed, which is not the usual form of Nesterov except in the strongly convex case). It would be beneficial to delineate how this is or isn't related to heavy ball.
The experiments would benefit from a wall-clock time comparison too, rather than just epochs since these new methods would be slower (but presumably not by much).
The appendix is huge with most of the technical details relegated there which I did not read fully. I think this impacts the readability significantly, though not grounds for rejection. Perhaps it suggests that a conference with a small page limit is not the best venue?
It seems that SGD still has better convergence early on. The authors suggest their method fixes this (relative to standard nesterov SGD) but it doesn't seem to be quite as good as SGD. Can you explain or discuss why this is still the case?
The assumptions require some explanation; they are just listed with no context. What are they and why are they useful?
Step size "should be constrained to O(1/L)" is misleading, you should say explicitly that step-size <= 1/L (or whatever it is depending on the algorithm).
Some of the writing is a bit strange / sloppy, e.g.:
"AM2-SGD is a bit tricky in randomness"
"However, full-batch loss is way too expensive to evaluate."
In Algorithm 1 AM1-SGD:
"xk+1 ← xk+1 + β · (˜x+ − x˜)"
doesn't parse because x_{k+1} appears twice.
Missing references:
Accelerated proximal algorithms:
*) Beck and Teboulle: A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
*) Nesterov: Gradient Methods for Minimizing Composite Objective Function
Restarting (slightly different to your approach but still relevant):
*) O'Donoghue: Adaptive Restart for Accelerated Gradient Schemes |
ICLR | Title
Amortized Nesterov's Momentum: Robust and Lightweight Momentum for Deep Learning
Abstract
Stochastic Gradient Descent (SGD) with Nesterov’s momentum is a widely used optimizer in deep learning, which is observed to have excellent generalization performance. In this work, we propose Amortized Nesterov’s Momentum, which is a special variant of Nesterov’s momentum. Compared with Nesterov’s momentum, our new momentum has more robust iterates and higher efficiency. Our empirical results show that it achieves faster early convergence and comparable final generalization performance with little-to-no tuning. Just like Nesterov’s method, the new schemes are also proved optimal in general convex setting. Our analysis sheds light on the understanding of the new variant.
1 INTRODUCTION
In recent years, Gradient Descent (GD) (Cauchy, 1847) and its variants have been widely used to solve large scale machine learning problems. Among them, Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951), which replaces gradient with an unbiased stochastic gradient estimator, is a popular choice of optimizer especially for neural network training which requires lower precision. Sutskever et al. (2013) found that using SGD with Nesterov’s momentum (Nesterov, 1983; 2013b), which was originally designed to accelerate deterministic convex optimization, achieves substantial speedups for training neural networks. This finding essentially turns SGD with Nesterov’s momentum into the benchmarking method of neural network design, especially for classification tasks (He et al., 2016b;a; Zagoruyko & Komodakis, 2016; Huang et al., 2017). It is observed that in these tasks, the momentum technique plays a key role in achieving good generalization performance.
Adaptive methods (Duchi et al., 2011; Kingma & Ba, 2015; Tieleman & Hinton, 2012; Reddi et al., 2018), which are also becoming increasingly popular in the deep learning community, diagonally scale the gradient to speed up training. However, Wilson et al. (2017) show that these methods always generalize poorly compared with SGD with momentum (both classical momentum (Polyak, 1964) and Nesterov’s momentum).
In this work, we introduce Amortized Nesterov’s Momentum, which is a special variant of Nesterov’s momentum. From users’ perspective, the new momentum has only one additional integer hyper-parameter m to choose, which we call the amortization length. Learning rate and momentum parameter of this variant are strictly aligned with Nesterov’s momentum and by choosing m = 1, it recovers Nesterov’s momentum. This paper conducts an extensive study based on both empirical evaluation and convex analysis to identify the benefits of the new variant (or from users’ angle, to set m apart from 1). We list the advantages of Amortized Nesterov’s Momentum as follows:
• Increasing m improves robustness1. This is an interesting property since the new momentum not only provides acceleration, but also enhances the robustness. We provide an understanding of this property by analyzing the relation between convergence rate and m in the convex setting.
• Increasing m reduces (amortized) iteration complexity.
• A suitably chosen m boosts the convergence rate in the early stage of training and produces comparable final generalization performance.
• It is easy to tune m. The performances of the methods are stable for a wide range of m and we prove that the methods converge for any valid choice of m in the convex setting.
• If m is not too large, the methods obtain the optimal convergence rate in the general convex setting, just like Nesterov’s method.

1In this work, robustness refers to the probability of an optimizer significantly deviating from its expected performance, which can be reflected by the deviations of accuracy or loss in the training process over multiple runs that start with the same initial guess.
The new variant does have some minor drawbacks: it requires one more memory buffer, which is acceptable in most cases, and it shows some undesired behaviors when working with learning rate schedulers, which can be addressed by a small modification. Considering these pros and cons, we believe that the proposed variant can benefit many large-scale deep learning tasks.
Our high level idea is simple: the stochastic Nesterov’s momentum can be unreliable since it is provided only by the previous stochastic iterate. The iterate potentially has large variance, which may lead to a false momentum that perturbs the training process. We thus propose to use the stochastic Nesterov’s momentum based on several past iterates, which provides robust acceleration. In other words, instead of immediately using an iterate to provide momentum, we put the iterate into an “amortization plan” and use it later.
2 PRELIMINARIES: SGD AND NESTEROV’S MOMENTUM
We start with a review of SGD and Nesterov’s momentum. We discuss some subtleties in the implementation and evaluation, which contributes to the interpretation of our methods.
Notations In this paper, we use x ∈ Rd to denote the vector of model parameters. ‖·‖ and 〈·, ·〉 denote the standard Euclidean norm and inner product, respectively. Scalar multiplication for v ∈ Rd and β ∈ R is denoted as β ·v. f : Rd → R denotes the loss function to be minimized and∇f(x) represents the gradient of f evaluated at x. We denote the unbiased stochastic gradient estimator of ∇f(x) as ∇fi(x) with the random variable i independent of x (e.g., using mini-batch). We use x0 ∈ Rd to denote the initial guess.
SGD SGD has the following simple iterative scheme, where γ ∈ R denotes the learning rate:
x_{k+1} = x_k − γ · ∇f_{i_k}(x_k), for k ≥ 0.
Nesterov’s momentum The original Nesterov’s accelerated gradient (with constant step) (Nesterov, 1983; 2013b) has the following scheme2 (y ∈ R^d, η, β ∈ R and y_0 = x_0):

y_{k+1} = x_k − η · ∇f(x_k),
x_{k+1} = y_{k+1} + β · (y_{k+1} − y_k), for k ≥ 0,  (1)

where we call β · (y_{k+1} − y_k) the momentum. By simply replacing ∇f(x_k) with ∇f_{i_k}(x_k), we obtain the SGD with Nesterov’s momentum, which is widely used in deep learning. To make this point clear, recall that the reformulation in Sutskever et al. (2013) (scheme (2), also the Tensorflow (Abadi et al., 2016) version) and the PyTorch (Paszke et al., 2017) version (scheme (3)) have the following schemes (v, v^{pt} ∈ R^d and v_0 = v^{pt}_0 = 0): for k ≥ 0,

(2) v_{k+1} = β · v_k − η · ∇f_{i_k}(y_k + β · v_k), y_{k+1} = y_k + v_{k+1}.
(3) v^{pt}_{k+1} = β · v^{pt}_k + ∇f_{i_k}(x_k), x_{k+1} = x_k − η · (β · v^{pt}_{k+1} + ∇f_{i_k}(x_k)).

Here the notations are modified based on their equivalence to scheme (1). It can be verified that schemes (2) and (3) are equivalent to (1) through v_k = β^{−1} · (x_k − y_k) and v^{pt}_k = η^{−1}β^{−1} · (y_k − x_k), respectively (see Defazio (2018) for other equivalent forms of scheme (1)).
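As a quick numerical check of these equivalences (a sketch with a deterministic quadratic gradient standing in for ∇f_{i_k}):

```python
import numpy as np

rng = np.random.default_rng(1)
d, K = 4, 50
A = rng.standard_normal((d, d)); A = A.T @ A + np.eye(d)
grad = lambda x: A @ x           # gradient of f(x) = 0.5 x'Ax
x0 = rng.standard_normal(d)
eta, beta = 0.02, 0.9

# Scheme (1): track x and y sequences
x, y = x0.copy(), x0.copy()
for _ in range(K):
    y_new = x - eta * grad(x)
    x = y_new + beta * (y_new - y)
    y = y_new

# Scheme (3): PyTorch-style buffer
x3, v = x0.copy(), np.zeros(d)
for _ in range(K):
    g = grad(x3)
    v = beta * v + g
    x3 = x3 - eta * (beta * v + g)

print(np.allclose(x, x3))  # should print True: both track {x_k}
```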
Interestingly, both PyTorch and Tensorflow3 track the values {xk}, which we refer to as M-SGD. This choice allows a consistent implementation when wrapped in a generic optimization layer (Defazio, 2018). However, the accelerated convergence rate (in the convex case) is built upon {yk} (Nesterov, 2013b) and {xk} may not possess such a theoretical improvement. We use OM-SGD to refer to the Original M-SGD that outputs {yk}.
2We exchange the notations of x and y in Nesterov (2013b).
3Tensorflow tracks the values {y_k + β · v_k} = {x_k}.
SGD and M-SGD In order to study the features of momentum, in this work, we regard momentum as an add-on to plain SGD, which corresponds to fixing the learning rates4 γ = η. From the interpretation in Allen-Zhu & Orecchia (2017), η represents the learning rate for the gradient descent “inside” Nesterov’s method. To introduce the evaluation metrics of this paper, we report the results of training ResNet34 (He et al., 2016b) on CIFAR-10 (Krizhevsky et al., 2009) (our basic case study) using SGD and M-SGD in Figure 1. In this paper, all the multiple runs start with the same initial guess x0. Figure 1a shows that Nesterov’s momentum hurts the convergence in the first 60 epochs but accelerates the final convergence, which verifies the importance of momentum for achieving high accuracy. Figure 1b depicts the robustness of M-SGD and SGD, which suggests that adding Nesterov’s momentum slightly increases the uncertainty in the training process of SGD.
Train-batch loss vs. Full-batch loss In Figure 1c, train-batch loss stands for the average of batch losses forwarded in an epoch, which is commonly used to indicate the training process in deep learning. Full-batch loss is the average loss over the entire training dataset evaluated at the end of each epoch. In terms of optimizer evaluation, full-batch loss is much more informative than trainbatch loss as it reveals the robustness of an optimizer. However, full-batch loss is too expensive to evaluate and thus we only measure it on small datasets. On the other hand, test accuracy couples optimization and generalization, but since it is also evaluated at the end of the epoch, its convergence is similar to full-batch loss. Considering the basic usage of momentum in deep learning, we mainly use test accuracy to evaluate optimizers. We provide more discussion on this issue in Appendix C.2.
M-SGD vs. OM-SGD We also include OM-SGD in Figure 1a. In comparison, the final accuracies of M-SGD and OM-SGD are 94.606%± 0.152% and 94.728%± 0.111% with average deviations at 1.040% and 0.634%, respectively. This difference can be explained following the interpretation in Hinton (2012) that {xk} are the points after “jump” and {yk} are the points after “correction”.
3 AMORTIZED NESTEROV’S MOMENTUM
In this section, we formally introduce SGD with Amortized Nesterov’s Momentum (AM1-SGD) in Algorithm 1 with the following remarks:
Options It can be verified that if m = 1, AM1-SGD with Option I degenerates to M-SGD and Option II corresponds to OM-SGD. Just like the case for M-SGD and OM-SGD, the accelerated convergence rate is built upon Option II while Option I is easier to be implemented in a generic optimization layer5. Intuitively, Option I is SGD with amortized momentum and Option II applies an m-iterations tail averaging on Option I.
4Ma & Yarats (2019) observed that when effective learning rates γ = η(1 − β)−1 are fixed, M-SGD and SGD have similar performance. We provide a discussion on this observation in Appendix C.1.
5To implement Option II, we can either maintain another identical network for the shifted point x̃ or temporarily change the network parameters in the evaluation phase.
Algorithm 1 AM1-SGD Input: Initial guess x0, learning rate η, momentum β, amortization length m, iteration number K. Initialize: x← x0, x̃← x0, x̃+ ← 0 {a running average}.
1: for k = 0, . . . ,K − 1 do 2: x← x− η · ∇fik(x). 3: x̃+ ← x̃+ + 1m · x. 4: if (k + 1) mod m = 0 then 5: x← x+ β · (x̃+ − x̃). {adding amortized momentum} 6: x̃← x̃+, x̃+ ← 0. 7: end if 8: end for
Output: Option I: x, Option II: x̃. * The symbol ‘←’ denotes assignment.
Efficiency We can improve the efficiency of Algorithm 1 by maintaining a running scaled momentum ṽ+ , m · (x̃+ − x̃) instead of the running average x̃+, by replacing the following steps in Algorithm 1:
Initialize: x← x0, x̃← x0, ṽ+ ← −m · x0, Step 3: ṽ+ ← ṽ+ + x. Step 5: x← x+ (β/m) · ṽ+. Step 6: x̃← x̃+ (1/m) · ṽ+, ṽ+ ← −m · x̃.
Then, in one m-iterations loop, for each of the first m − 1 iterations, AM1-SGD requires 1 vector addition and 1 scaled vector addition. At the m-th iteration, it requires 1 vector addition, 1 scalarvector multiplication and 3 scaled vector additions. In comparison, M-SGD (standard PyTorch) requires 1 vector addition, 1 (in-place) scalar-vector multiplication and 2 scaled vector additions per iteration. Thus, as long as m > 2, AM1-SGD has lower amortized cost than M-SGD. For memory complexity, AM1-SGD requires one more auxiliary buffer than M-SGD.
Tuning m We did a parameter sweep for m in our basic case study. We plot the final and the average deviation of test accuracies over 5 runs againstm in Figure 2a. Note that m=1 corresponds to the results of M-SGD and OM-SGD, which are already given in Figure 1. From this empirical result, m introduces a trade-off between final accuracy and robustness (the convergence behaviors can be found in Appendix A.1). Figure 2a suggests that m= 5 is a good choice for this task. For simplicity, and also as a recommended setting, we fix m=5 for the rest of experiments in this paper.
A momentum that increases robustness To provide a stronger justification, we ran 20 seeds with m = 5 in Figure 2b and the detailed data are given in Figure 3 & Table 1. The results show that the amortized momentum significantly increases the robustness. Intuitively, the gap between Option I and Option II can be understood as the effect of tail averaging. However, the large gap between Option I and SGD is somewhat mysterious: what Option I does is to inject a very large momentum6 into SGD every m iterations. It turns out that this momentum not only provides acceleration, but also helps the algorithm become more robust than SGD. This observation basically differentiates AM1-SGD from a simple interpolation in-between M-SGD and SGD.
6Amortized momentum β ·(x̃+−x̃) is expected to be much large than Nesterov’s momentum β ·(yk+1−yk).
Learning rate scheduler issue We observed that when we use schedulers with a large decay factor and the momentum β is too large for the task (e.g., 0.995 for the task of this section), there would be a performance drop after the learning rate reduction. We believe that it is caused by the different cardinalities of iterates being averaged in x̃+, which leads to a false momentum. This issue is resolved by restarting the algorithm after each learning rate reduction inspired by (O’donoghue & Candes, 2015). We include more discussion and evidence in Appendix A.4.
3.1 AM2-SGD: A VARIANT WITH IDENTICAL ITERATIONS
Algorithm 2 AM2-SGD
Input: Initial guess x0, amortization lengthm, a point table φ = [φ1 · · · φm] ∈ Rd×m, learning rate η, momentum β, iteration number K. Initialize: φ0j = x0,∀j ∈ [m]*. {jk | jk ∈ [m]} K−1 k=0 is a sequence of uniformly random indexes.
If Option II is used, φ̄0 = x0. {a running average for the point table φ} 1: for k = 0, . . . ,K − 1 do 2: φk+1jk = xk − η · ∇fik(xk) and keep other entries unchanged (i.e., φ k+1 j = φ k j for j 6= jk). 3: xk+1 = φ k+1 jk + β · (φk+1jk+1 − φ k jk
). {adding amortized momentum} 4: if Option II then φ̄k+1 = φ̄k + 1m · ( φk+1jk − φ k jk ) .
5: end for Output: Option I (not recommended): xK , Option II: φ̄K . * [m] denotes the set {1, . . . ,m}.
While enjoying an improved efficiency, AM1-SGD does not have identical iterations7, which to some extent limits its extensibility to other settings (e.g., asynchronous setting). In this section, we propose a variant of Amortized Nesterov’s Momentum (AM2-SGD, Algorithm 2) to address this problem. To show the characteristics of AM2-SGD, we make the following remarks:
Trading memory for extensibility In expectation, the point table φ stores the most recent m iterations and thus the output φ̄K is an m-iterations tail average, which connects to AM1-SGD. The relation between AM1-SGD and AM2-SGD resembles that of SVRG (Johnson & Zhang, 2013) and SAGA (Defazio et al., 2014), the most popular methods in finite-sum convex optimization: to reuse the information from several past iterates, we can either maintain a “snapshot” that aggregates the information or keep the iterates in a table. A side-by-side comparison is given in Section 4.
Options and convergence As in the case of AM1-SGD, if m = 1, AM2-SGD with Option I corresponds to M-SGD and Option II is OM-SGD. In our preliminary experiments, the convergence of AM2-SGD is similar to AM1-SGD and it also has the learning rate scheduler issue. In our preliminary experiments (can be found in Appendix A), we observed that Option I is consistently worse than Option II and it does not seem to benefit from increasingm. Thus, we do not recommend using Option I. We also set m = 5 for AM2-SGD for its evaluation due to the similarity.
7For AM1-SGD, the workload varies for different iteration k due to the if-clause at Step 4.
Additional randomness {j_k} In our implementation, at each iteration, we sample an index in [m] as j_{k+1} and obtain the stored index j_k. We observed that with Option I, AM2-SGD has much larger deviations than AM1-SGD, which we believe is caused by the additional random indexes {j_k}.
4 CONVERGENCE RESULTS
The original Nesterov’s accelerated gradient is famous for its optimal convergence rates for solving convex problems. In this section, we analyze the convergence rates for AM1-SGD and AM2-SGD in the convex case, which explicitly model the effect of amortization (i.e., m). While these rates do not hold for deep learning problems in general, they help us understand the observed convergence behaviors of the proposed methods, especially on how they differ from M-SGD (m = 1). Moreover, the analysis also provides intuition on tuning m. Since the original Nesterov’s method is deterministic (Nesterov, 1983; 2013b), we follow the setting of its stochastic variants (Lan, 2012; Ghadimi & Lan, 2012), in which Nesterov’s acceleration also achieves the optimal rates.
We consider the following convex composite problem (Beck & Teboulle, 2009; Nesterov, 2013a):

min_{x∈X} { F(x) ≜ f(x) + h(x) }, (4)
where X ⊆ R^d is a non-empty closed convex set and h is a proper convex function with its proximal operator prox_{αh}(·)8 available. We impose the following assumptions on the regularity of f and the stochastic oracle ∇f_i (identical to the ones in Ghadimi & Lan (2012) with µ = 0):
Assumptions. For some L ≥ 0, M ≥ 0, σ ≥ 0,
(a) 0 ≤ f(y) − f(x) − ⟨∇f(x), y − x⟩ ≤ (L/2)‖y − x‖² + M‖y − x‖, ∀x, y ∈ X.9
(b) E_i[∇f_i(x)] = ∇f(x), ∀x ∈ X.
(c) E_i[‖∇f_i(x) − ∇f(x)‖²] ≤ σ², ∀x ∈ X.
The notation E_{i_k}[ · ] is E[ · | (i_0, . . . , i_{k−1})] for a random process i_0, i_1, . . . . These assumptions cover several important classes of convex problems. For example, (a) covers the cases of f being L-smooth (M = 0) or L_0-Lipschitz continuous (M = 2L_0, L = 0) convex functions, and if σ = 0 in (c), the assumptions cover several classes of deterministic convex programming problems. We denote x* ∈ X as a solution to problem (4) and x_0 ∈ X as the initial guess. Unlike its usage in deep learning, the momentum parameter β is always a variable in general convex analysis. For the simplicity of analysis, we reformulate AM1-SGD (Algorithm 1) and AM2-SGD (Algorithm 2) into the following schemes10 (z ∈ X, α ∈ R):
AM1-SGD (reformulated, proximal)
Initialize: x̃_0 = z_0 = x_0, S = K/m.
1: for s = 0, . . . , S − 1 do
2:  for j = 0, . . . , m − 1 do
3:   k = sm + j.
4:   x_k = (1 − β_s) · z_k + β_s · x̃_s.
5:   z_{k+1} = prox_{α_s h}{z_k − α_s · ∇f_{i_k}(x_k)}.
6:   (x_{k+1} = (1 − β_s) · z_{k+1} + β_s · x̃_s.)
7:  end for
8:  x̃_{s+1} = (1/m) Σ_{j=1}^m x_{sm+j}.
9: end for
Output: x̃_S.

AM2-SGD (reformulated, proximal)
Initialize: z_0 = φ_j^0 = x_0, ∀j ∈ [m].
1: for k = 0, . . . , K − 1 do
2:  Sample j_k uniformly in [m].
3:  x_k^{j_k} = (1 − β_k) · z_k + β_k · φ_{j_k}^k.
4:  z_{k+1} = prox_{α_k h}{z_k − α_k · ∇f_{i_k}(x_k^{j_k})}.
5:  φ_{j_k}^{k+1} = (1 − β_k) · z_{k+1} + β_k · φ_{j_k}^k.
6: end for
Output: φ̄_K = (1/m) Σ_{j=1}^m φ_j^K.
We show in Appendix B.1 that when h ≡ 0 and β is a constant, the reformulated schemes AM1-SGD and AM2-SGD are equivalent to Algorithm 1 and Algorithm 2 through α_s = η(1 − β_s)^{−1} and α_k = η(1 − β_k)^{−1}.
8 ∀x ∈ R^d, prox_{αh}(x) ≜ argmin_{u∈X} { (1/2)‖u − x‖² + αh(u) }, see Parikh et al. (2014).
9 When M > 0, f is not necessarily differentiable and we keep using the notation ∇f(x) to denote an arbitrary subgradient of f at x for consistency.
10 For simplicity, we assume K is divisible by m.
These reformulations are basically how Nesterov's momentum was migrated into deep learning (Sutskever et al., 2013). Then we establish the convergence rates for AM1-SGD and AM2-SGD as follows. All the proofs in this paper are given in Appendix B.2.
Theorem 1. For the reformulated AM1-SGD, suppose we choose

β_s = s/(s + 2) and α_s = λ_1/(L(1 − β_s)) with λ_1 = min{ 2/3, L‖x_0 − x*‖ / (2√m · √(σ² + M²) · (S + 1)^{3/2}) }. (5)
Then,
(a) The output x̃_S satisfies

E[F(x̃_S)] − F(x*) ≤ 3Lm‖x_0 − x*‖²/(K + m)² + 8‖x_0 − x*‖√(σ² + M²)/√(K + m) ≜ K_0(m).
(b) If the variance has a “light tail”, i.e., E_i[exp{‖∇f_i(x) − ∇f(x)‖²/σ²}] ≤ exp{1}, ∀x ∈ X, and X is compact, denoting D_X ≜ max_{x∈X} ‖x − x*‖, for any Λ ≥ 0, we have

Prob{ F(x̃_S) − F(x*) ≤ K_0(m) + 4Λσ(3‖x_0 − x*‖ + √6·D_X)/(3√(K + m)) } ≥ 1 − (exp{−Λ²/3} + exp{−Λ}).
Remarks: (a) Regarding K_0(m), its minimum is obtained at either m = 1 or m = K. Note that for AM1-SGD, m is strictly constrained in {1, . . . , K}. It can be verified that when m = K, AM1-SGD becomes the modified mirror descent SA (Lan, 2012), or under the Euclidean setting, the SGD that outputs the average of the whole history, which is rarely used in practice. In this case, the convergence rate in Theorem 1a becomes the corresponding O(L/K + (σ + M)/√K) (cf. Theorem 1 in Lan (2012)). Thus, we can regard AM1-SGD as a smooth transition between AC-SA and the modified mirror descent SA. (b) The additional compactness and “light tail” assumptions are similarly required in Nemirovski et al. (2009); Lan (2012); Ghadimi & Lan (2012). Recently, Juditsky et al. (2019) established similar bounds under weaker assumptions by truncating the gradient. However, as indicated by the authors, their technique cannot be used for accelerated algorithms due to the accumulation of bias.
Understandings: Theorem 1a gives the expected performance in terms of full-batch loss F(x̃) − F(x*), from which the trade-off of m is clear: Increasing m improves the dependence on variance σ but deteriorates the O(L/K²) term (i.e., the acceleration). Based on this trade-off, we can understand the empirical results in Figure 2b: the faster convergence in the early stage could be the result of a better control on σ, and the slightly lowered final accuracy is possibly caused by the reduced acceleration effect. Theorem 1b provides the probability of the full-batch loss deviating from its expected performance (i.e., K_0(m)). It is clear that increasing m leads to smaller deviations with the same probability, which sheds light on the understanding of the increased robustness observed in Figure 2. Since the theorem is built on the full-batch loss, we did an experiment based on this metric in Figure 4 & Table 2. Here we choose training a smaller ResNet18 with pre-activation (He et al., 2016a) on CIFAR-10 as the case study (the test accuracy is reported in Appendix A.5).
For AM2-SGD, we only give the expected convergence results as follows.
Theorem 2. For the reformulated AM2-SGD, if we choose

β_k = (k/m)/(k/m + 2) and α_k = λ_2/(L(1 − β_k)) with λ_2 = min{ 2/3, L‖x_0 − x*‖ / (√(2m)·(σ + M)·((K−1)/m + 2)^{3/2}) },

the output φ̄_K satisfies

E[F(φ̄_K)] − F(x*) ≤ [4(m² − m)(F(x_0) − F(x*)) + 3Lm‖x_0 − x*‖²]/(K + 2m − 1)² + 4√2·‖x_0 − x*‖(σ + M)/√(K + 2m − 1).
Remark: In comparison with Theorem 1a, Theorem 2 has an additional term F(x_0) − F(x*) in the upper bound, which is inevitable. This difference comes from different restrictions on the choice of m. For AM2-SGD, m ≥ 1 is the only requirement. Since it is impossible to let m ≫ K and still obtain an improved rate, this additional term is inevitable. As a sanity check, we can let m → ∞ to obtain a point table with almost all x_0, and then the upper bound becomes exactly F(x_0) − F(x*). In some cases, there exists an optimal choice of m > 1 in Theorem 2. However, the optimal choice could be messy and thus we omit the discussion here.
Understanding: Comparing the rates, we see that when using the same m, AM2-SGD has slightly better dependence on σ, which is related to the observation in Figure 5 that AM2-SGD is always slightly faster than AM1-SGD. This difference suggests that randomly incorporating past iterates beyond m iterations helps. If m = O(1), Theorems 1 and 2 establish the optimal O(L/K² + (σ + M)/√K) rate in the convex setting (see Lan (2012) for optimality), which verifies AM1-SGD and AM2-SGD as variants of Nesterov's method (Nesterov, 1983; 2013b). From the above analysis, the effect of m can be understood as trading acceleration for variance control. However, since both acceleration and variance control boost the convergence speed, the reduced final performance observed in the CIFAR experiments may not always be the case, as will be shown in Figure 5 and Table 3.
Connections with Katyusha Our original inspiration of AM1-SGD comes from the construction of Katyusha (Allen-Zhu, 2018), the recent breakthrough in finite-sum convex optimization, which uses a previously calculated “snapshot” point to provide momentum, i.e., Katyusha momentum. AM1-SGD also uses an aggregated point to provide momentum and it shares many structural similarities with Katyusha. We refer the interested readers to Appendix B.3.
5 PERFORMANCE EVALUATION
In this section, we evaluate AM1-SGD and AM2-SGD on more deep learning tasks. Our goal is to show their potential to serve as alternatives for M-SGD. Regarding the options: for AM1-SGD, Option I is a nice choice, which has slightly better final performance as shown in Table 1; for AM2-SGD, Option I is not recommended as mentioned before. Here we choose to evaluate Option II for both methods for consistency, which also corresponds to the analysis in Section 4. AM1-SGD and AM2-SGD use exactly the same values for (η, β) as M-SGD, which was tuned to optimize the performance of M-SGD. We set m = 5 for AM1-SGD and AM2-SGD.
We trained ResNet50 and ResNet152 (He et al., 2016b) on the ILSVRC2012 dataset (“ImageNet”) (Russakovsky et al., 2015), with results shown in Figure 5b. For this task, we used 0.1 initial learning rate and 0.9 momentum for all methods, which is a typical choice. We performed a restart after each learning rate reduction as discussed in Appendix A.4. We believe that this helps the training process and also does not incur any additional overhead. We report the final accuracy in Table 3.
We also did a language model experiment on Penn Treebank dataset (Marcus et al., 1993). We used the LSTM (Hochreiter & Schmidhuber, 1997) model defined in Merity et al. (2017) and followed the experimental setup in its released code. We only changed the learning rate and momentum in
the setup. The baseline is SGD+ASGD11 (Polyak & Juditsky, 1992) with constant learning rate 30 as used in Merity et al. (2017). For the choice of (η, β), following Lucas et al. (2019), we chose β = 0.99 and used the scheduler that reduces the learning rate by half when the validation loss has not decreased for 15 epochs. We swept η from {5, 2.5, 1, 0.1, 0.01} and found that η = 2.5 resulted in the lowest validation perplexity for M-SGD. We thus ran AM1-SGD and AM2-SGD with this (η, β) and m = 5. Due to the small decay factor, we did not restart AM1-SGD and AM2-SGD after learning rate reductions. The validation perplexity curve is plotted in Figure 5a. We report validation perplexity and test perplexity in Table 3. This experiment is directly comparable with the one in Lucas et al. (2019).
Extra results are provided in the appendices for interested readers: the robustness when using large β (Appendix A.2), a CIFAR-100 experiment (Appendix A.6) and comparison with classical momentum (Polyak, 1964), AggMo (Lucas et al., 2019) and QHM (Ma & Yarats, 2019) (Appendix A.3).
6 CONCLUSIONS
We presented Amortized Nesterov’s Momentum, which is a special variant of Nesterov’s momentum that utilizes several past iterates to provide the momentum. Based on this idea, we designed two different realizations, namely, AM1-SGD and AM2-SGD. Both of them are simple to implement with little-to-no additional tuning overhead over M-SGD. Our empirical results demonstrate that switching to AM1-SGD and AM2-SGD produces faster early convergence and comparable final generalization performance. AM1-SGD is lightweight and has more robust iterates than M-SGD, and thus can serve as a favorable alternative to M-SGD in large-scale deep learning tasks. AM2-SGD could be favorable for more restrictive tasks (e.g., asynchronous training) due to its extensibility and good performance. Both the methods are proved optimal in the convex case, just like M-SGD. Based on the intuition from convex analysis, the proposed methods are trading acceleration for variance control, which provides hints for the hyper-parameter tuning.
11SGD+ASGD is to run SGD and switch to averaged SGD (ASGD) when a threshold is met.
Appendices
A Extra Experimental Results
 A.1 The effect of m on convergence
 A.2 Robustness on large momentum parameters
 A.3 Comparison with other momentum
 A.4 Issues with learning rate schedulers
 A.5 Test accuracy results of Figure 4 & Table 2
 A.6 CIFAR-100 experiment
 A.7 A sanity check
B Missing parts in Section 4
 B.1 The reformulations
 B.2 Proofs of Theorem 1 and Theorem 2
  B.2.1 Proof of Lemma 1
  B.2.2 Proof of Theorem 1a
  B.2.3 Proof of Theorem 1b
  B.2.4 Proof of Theorem 2
 B.3 Connections between AM1-SGD and Katyusha
C Miscellanies
 C.1 Comparison of SGD and M-SGD
 C.2 Training evaluation
D Experimental Setup
 D.1 Classification Setup
 D.2 Language Model Setup
A EXTRA EXPERIMENTAL RESULTS
In this appendix, we provide more experimental results to further evaluate the Amortized Nesterov’s Momentum. Table 4 shows the detailed data of the parameter sweep experiments, where the convergence curves of these results are given in Appendix A.1. In Appendix A.2, we compare the robustness of AM1-SGD and M-SGD on large momentum parameters. In Appendix A.3, we empirically compare the Amortized Nesterov’s Momentum with classical momentum (Polyak, 1964), aggregated momentum (Lucas et al., 2019) and quasi-hyperbolic momentum (Ma & Yarats, 2019). We discuss the issues with learning rate schedulers in Appendix A.4. We report the test accuracy results of the ResNet18 experiment (in Section 4) in Appendix A.5. A CIFAR-100 experiment is provided in Appendix A.6. We also provide a sanity check for our implementation in Appendix A.7.
[Table 4 — columns: Method, Description, Final Accuracy, Avg. STD]
A.1 THE EFFECT OF m ON CONVERGENCE
We show in Figure 6 how m affects the convergence of test accuracy. The results show that increasing m speeds up the convergence in the early stage. While for AM1-SGD the convergences of Option I and Option II are similar, AM2-SGD with Option II is consistently better than with Option I in this experiment. It seems that AM2-SGD with Option I does not benefit from increasing m and the algorithm is not robust. Thus, we do not recommend using Option I for AM2-SGD.
A.2 ROBUSTNESS ON LARGE MOMENTUM PARAMETERS
We compare the robustness of M-SGD and AM1-SGD when β is large in Figure 7 & Table 5. For fair comparison, AM1-SGD uses Option I. As we can see, the STD error of M-SGD scales up significantly when β is larger and the performance is more affected by a large β compared with AM1-SGD.
A.3 COMPARISON WITH OTHER MOMENTUM
In this section, we compare AM1-SGD (Option I) with classical momentum (Polyak, 1964), AggMo (Lucas et al., 2019) and QHM (Ma & Yarats, 2019) in our basic case study (training ResNet34 on
CIFAR-10). Since we are not aware of what makes a fair comparison with these methods (e.g., it is not clear what is the effective learning rate for AM1-SGD), we compare them based on the default hyper-parameter settings suggested by their papers.
Classical Momentum The SGD with classical momentum (CM-SGD) that is widely used in deep learning has the following scheme (standard PyTorch) (v^{cm} ∈ R^d, v^{cm}_0 = 0):

v^{cm}_{k+1} = β · v^{cm}_k + ∇f_{i_k}(x_k),
x_{k+1} = x_k − η · v^{cm}_{k+1}, for k ≥ 0.
CM-SGD with its typical hyper-parameter settings (η0 = 0.1, β = 0.9) is observed to achieve similar generalization performance as M-SGD. However, CM-SGD is more unstable and prone to oscillations (Lucas et al., 2019), which makes it less robust than M-SGD as shown in Table 6.
Aggregated Momentum (AggMo) AggMo combines multiple momentum buffers, which is inspired by the passive damping from the physics literature (Lucas et al., 2019). AggMo uses the following update rules (for t = 1, . . . , T, v^{(t)} ∈ R^d, v^{(t)}_0 = 0):

v^{(t)}_{k+1} = β^{(t)} · v^{(t)}_k − ∇f_{i_k}(x_k), for t = 1, . . . , T,
x_{k+1} = x_k + (η/T) · Σ_{t=1}^T v^{(t)}_{k+1}, for k ≥ 0.
We used the exponential hyper-parameter setting recommended in the original work with the scale-factor a = 0.1 fixed, β^{(t)} = 1 − a^{t−1}, for t = 1, . . . , T, and choosing T in {2, 3, 4}. We found that T = 2 gave the best performance in this experiment. As shown in Figure 8 & Table 6, with the help of passive damping, AggMo is more stable and robust compared with CM-SGD.
Quasi-hyperbolic Momentum (QHM) Ma & Yarats (2019) introduce the immediate discount factor ν ∈ R for the momentum scheme, which results in the QHM update rules (α ∈ R, v^{qh} ∈ R^d, v^{qh}_0 = 0):

v^{qh}_{k+1} = β · v^{qh}_k + (1 − β) · ∇f_{i_k}(x_k),
x_{k+1} = x_k − α · (ν · v^{qh}_{k+1} + (1 − ν) · ∇f_{i_k}(x_k)), for k ≥ 0.

Here we used the recommended hyper-parameter setting for QHM (α_0 = 1.0, β = 0.999, ν = 0.7).
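For reference, the three baseline updates above can be sketched in a few lines of NumPy; the function names and the argument g (the stochastic gradient at the current point) are our own illustrative choices:

import numpy as np

def cm_sgd_step(x, v, g, eta, beta):
    v = beta * v + g                              # classical momentum buffer
    return x - eta * v, v

def aggmo_step(x, vs, g, eta, betas):
    vs = [b * v - g for b, v in zip(betas, vs)]   # T damped velocity buffers
    return x + (eta / len(vs)) * np.sum(vs, axis=0), vs

def qhm_step(x, v, g, alpha, beta, nu):
    v = beta * v + (1.0 - beta) * g               # exponential moving average
    return x - alpha * (nu * v + (1.0 - nu) * g), v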
Figure 8 shows that AM1-SGD, AggMo and QHM achieve faster convergence in the early stage while CM-SGD has the highest final accuracy. In terms of robustness, huge gaps are observed when comparing AM1-SGD with the remaining methods in Table 6. Note that AM1-SGD is more efficient than both QHM and AggMo, and is as efficient as CM-SGD.
We also plot the convergence of train-batch loss for all the methods in Figure 9. Despite showing worse generalization performance, both QHM and AggMo perform better on reducing the train-batch loss in this experiment, which is consistent with the results reported in Ma & Yarats (2019); Lucas et al. (2019).
A.4 ISSUES WITH LEARNING RATE SCHEDULERS
We show in Figure 10 that when β is large for the task, using step learning rate scheduler with decay factor 10, a performance drop is observed after each reduction. Both Option I and Option II have this issue and the curves are basically identical. Here we only use Option II. We fix this issue by performing a restart after each learning rate reduction (labeled with ‘+’). We plot the train-batch loss here because we find the phenomenon is clearer in this way. If β = 0.9, there is no observable performance drop in this experiment.
For smooth-changing schedulers such as the cosine annealing scheduler (Loshchilov & Hutter, 2016), the amortized momentum works well as shown in Figure 11.
A.5 TEST ACCURACY RESULTS OF FIGURE 4 & TABLE 2
We report the test accuracy results of the experiments in Section 4 in Figure 12 & Table 7. These results are reminiscent of the ResNet34 experiments (Figure 3 & Table 1).
A.6 CIFAR-100 EXPERIMENT
We report the results of training DenseNet121 (Huang et al., 2017) on CIFAR-100 in Figure 13, which shows that both AM1-SGD and AM2-SGD perform well before the final learning rate reduction. However, the final accuracies are lowered by around 0.6% compared with M-SGD. We also notice that SGD reduces the train-batch loss at an incredibly fast rate and the losses it reaches are consistently lower than other methods in the entire 300 epochs. However, this performance is not
reflected in the convergence of test accuracy. We believe that this phenomenon suggests that the DenseNet model is actually “overfitting” M-SGD (since in the ResNet experiments, M-SGD always achieves a lower train loss than SGD after the final learning rate reduction).
A.7 A SANITY CHECK
When m = 1, both AM1-SGD and AM2-SGD are equivalent to M-SGD, we plot their convergence in Figure 14 as a sanity check (the detailed data is given in Table 4).
We observed that when m = 1, both AM1-SGD and AM2-SGD have a lower STD error than M-SGD. We believe that it is because they both maintain the iterates without scaling, which is numerically more stable than M-SGD (M-SGD in standard PyTorch maintains a scaled buffer, i.e., v^{pt}_k = η^{−1}β^{−1} · (y_k − x_k)).
B MISSING PARTS IN SECTION 4
B.1 THE REFORMULATIONS
When h ≡ 0 and β is a constant, we do the reformulations by eliminating the sequence {z_k}. For the reformulated AM2-SGD,

x_k^{j_k} = (1 − β) · z_k + β · φ_{j_k}^k,
z_{k+1} = z_k − α · ∇f_{i_k}(x_k^{j_k}),
φ_{j_k}^{k+1} = (1 − β) · z_{k+1} + β · φ_{j_k}^k,
(x_{k+1}^{j_{k+1}} = (1 − β) · z_{k+1} + β · φ_{j_{k+1}}^{k+1}),

and with α(1 − β) = η, eliminating {z_k} turns the reformulated AM2-SGD into

φ_{j_k}^{k+1} = x_k^{j_k} − η · ∇f_{i_k}(x_k^{j_k}),
x_{k+1}^{j_{k+1}} = φ_{j_k}^{k+1} + β · (φ_{j_{k+1}}^{k+1} − φ_{j_k}^k),

which is Algorithm 2.
For the reformulated AM1-SGD, when h ≡ 0, the inner loops are basically SGD,

x_k = (1 − β) · z_k + β · x̃_s,
z_{k+1} = z_k − α · ∇f_{i_k}(x_k),
(x_{k+1} = (1 − β) · z_{k+1} + β · x̃_s,)

and with α(1 − β) = η, eliminating {z_k} gives x_{k+1} = x_k − η · ∇f_{i_k}(x_k).
At the end of each inner loop (i.e., when (k + 1) mod m = 0), we have

x_{(s+1)m} = (1 − β) · z_{(s+1)m} + β · x̃_s,

while at the beginning of the next inner loop,

x_{(s+1)m} = (1 − β) · z_{(s+1)m} + β · x̃_{s+1},
which means that we need to set x_{k+1} ← x_{k+1} + β · (x̃_{s+1} − x̃_s) (reassign the value of x_{k+1}). We also give the reformulation of M-SGD (scheme (1)) to the Auslender & Teboulle (2006) scheme for reference:

x_k = (1 − β) · z_k + β · y_k,
z_{k+1} = z_k − α · ∇f_{i_k}(x_k),
y_{k+1} = (1 − β) · z_{k+1} + β · y_k,
(x_{k+1} = (1 − β) · z_{k+1} + β · y_{k+1},)

which is the Auslender & Teboulle (2006) scheme (AC-SA (Lan, 2012)); with α(1 − β) = η, eliminating {z_k} gives

y_{k+1} = x_k − η · ∇f_{i_k}(x_k),
x_{k+1} = y_{k+1} + β · (y_{k+1} − y_k),

which is Nesterov (1983; 2013b).
AC-SA (in the Euclidean case) maps to the Auslender & Teboulle (2006) scheme through (in the original notations) x = x^{md}, z = x, y = x^{ag}, 1 − β = β_t^{−1} and α = γ_t.
Intuition for the Auslender & Teboulle (2006) scheme can be found in Remark 2 in Lan (2012).
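As a quick numerical sanity check of this reformulation (our own illustrative quadratic with a deterministic gradient standing in for ∇f_{i_k}), the following Python snippet verifies that the Auslender & Teboulle scheme and Nesterov's scheme produce the same iterates when α(1 − β) = η:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 5)); b = rng.standard_normal(10)
grad = lambda x: A.T @ (A @ x - b)       # gradient of 0.5*||Ax - b||^2
eta, beta = 0.01, 0.9
alpha = eta / (1.0 - beta)               # so that alpha*(1 - beta) = eta

z = np.zeros(5); y_at = np.zeros(5)      # Auslender-Teboulle iterates (z_k, y_k)
x_n = np.zeros(5); y_n = np.zeros(5)     # Nesterov iterates (x_k, y_k)
for _ in range(50):
    x_at = (1 - beta) * z + beta * y_at  # extrapolation
    z = z - alpha * grad(x_at)
    y_at = (1 - beta) * z + beta * y_at
    y_new = x_n - eta * grad(x_n)        # Nesterov: gradient step, then momentum
    x_n = y_new + beta * (y_new - y_n)
    y_n = y_new
print(np.allclose(y_at, y_n))            # True: the {y_k} sequences coincide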
B.2 PROOFS OF THEOREM 1 AND THEOREM 2
The reformulated schemes are copied here for reference:
AM1-SGD (reformulated, proximal)
Initialize: x̃_0 = z_0 = x_0, S = K/m.
1: for s = 0, . . . , S − 1 do
2:  for j = 0, . . . , m − 1 do
3:   k = sm + j.
4:   x_k = (1 − β_s) · z_k + β_s · x̃_s.
5:   z_{k+1} = prox_{α_s h}{z_k − α_s · ∇f_{i_k}(x_k)}.
6:   (x_{k+1} = (1 − β_s) · z_{k+1} + β_s · x̃_s.)
7:  end for
8:  x̃_{s+1} = (1/m) Σ_{j=1}^m x_{sm+j}.
9: end for
Output: x̃_S.

AM2-SGD (reformulated, proximal)
Initialize: z_0 = φ_j^0 = x_0, ∀j ∈ [m].
1: for k = 0, . . . , K − 1 do
2:  Sample j_k uniformly in [m].
3:  x_k^{j_k} = (1 − β_k) · z_k + β_k · φ_{j_k}^k.
4:  z_{k+1} = prox_{α_k h}{z_k − α_k · ∇f_{i_k}(x_k^{j_k})}.
5:  φ_{j_k}^{k+1} = (1 − β_k) · z_{k+1} + β_k · φ_{j_k}^k.
6: end for
Output: φ̄_K = (1/m) Σ_{j=1}^m φ_j^K.
Comparing the reformulated schemes, we see that their iterations can be generalized as follows:

x = (1 − β) · z + β · y,
z⁺ = prox_{αh}{z − α · ∇f_i(x)},
y⁺ = (1 − β) · z⁺ + β · y. (6)
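For concreteness, one update of scheme (6) can be sketched in Python, instantiating h = λ‖·‖₁ so that the proximal operator is soft-thresholding; the oracle grad_f and all names are illustrative assumptions:

import numpy as np

def prox_l1(u, t, lam):
    # prox_{t*h}(u) for h = lam*||.||_1 reduces to soft-thresholding.
    return np.sign(u) * np.maximum(np.abs(u) - t * lam, 0.0)

def scheme6_step(z, y, grad_f, alpha, beta, lam):
    x = (1 - beta) * z + beta * y                        # extrapolated point
    z_plus = prox_l1(z - alpha * grad_f(x), alpha, lam)  # proximal gradient step
    y_plus = (1 - beta) * z_plus + beta * y              # output sequence update
    return z_plus, y_plus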
This type of scheme is first proposed in Auslender & Teboulle (2006), which represents one of the simplest variants of the Nesterov’s methods (see Tseng (2008) for other variants). The scheme is then modified into various settings (Hu et al., 2009; Lan, 2012; Ghadimi & Lan, 2012; 2016; Zhou et al., 2019; Lan et al., 2019) to achieve acceleration. The following lemma serves as a cornerstone for the convergence proofs of AM1-SGD and AM2-SGD.
Lemma 1. If α(1 − β) < 1/L, the update scheme (6) satisfies the following recursion:

(1/(1 − β))·(F(y⁺) − F(x*)) ≤ (β/(1 − β))·(F(y) − F(x*)) + (1/(2α))·(‖z − x*‖² − ‖z⁺ − x*‖²) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2(α^{−1} − L(1 − β))) + ⟨∇f(x) − ∇f_i(x), z − x*⟩.
B.2.1 PROOF OF LEMMA 1
This Lemma is similarly provided in Lan (2012); Ghadimi & Lan (2012) under a more general setting that allows non-Euclidean norms in the assumptions, we give a proof here for completeness.
Based on the convexity (Assumption (a)), we have

f(x) − f(x*) ≤ ⟨∇f(x), x − z⟩ + ⟨∇f(x) − ∇f_i(x), z − x*⟩ + ⟨∇f_i(x), z − z⁺⟩ + ⟨∇f_i(x), z⁺ − x*⟩ ≜ R_0 + R_1 + R_2 + R_3. (7)
We upper bound the terms on the right side one-by-one.
For R_0,

R_0 =(⋆) (β/(1 − β))·⟨∇f(x), y − x⟩ ≤ (β/(1 − β))·(f(y) − f(x)), (8)

where (⋆) uses the relation between x and z, i.e., (1 − β) · (x − z) = β · (y − x).
For R_2, based on Assumption (a), we have

f(y⁺) − f(x) + ⟨∇f(x), x − y⁺⟩ ≤ (L/2)‖x − y⁺‖² + M‖x − y⁺‖.

Then, noting that x − y⁺ = (1 − β) · (z − z⁺), we can arrange the above inequality as

R_2 ≤ (L(1 − β)/2)‖z − z⁺‖² + (1/(1 − β))·(f(x) − f(y⁺)) + ⟨∇f(x) − ∇f_i(x), z⁺ − z⟩ + M‖z − z⁺‖
  ≤ (L(1 − β)/2)‖z − z⁺‖² + (1/(1 − β))·(f(x) − f(y⁺)) + (‖∇f(x) − ∇f_i(x)‖ + M)·‖z − z⁺‖.

Using Young's inequality with ζ > 0, we obtain

R_2 ≤ ((L(1 − β) + ζ)/2)‖z − z⁺‖² + (1/(1 − β))·(f(x) − f(y⁺)) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2ζ). (9)
For R_3, based on the optimality condition of prox_{αh}{z − α · ∇f_i(x)} and denoting ∂h(z⁺) as a subgradient of h at z⁺, we have for any u ∈ X,

⟨α · ∂h(z⁺) + z⁺ − z + α · ∇f_i(x), u − z⁺⟩ ≥ 0,
⟨∇f_i(x), z⁺ − u⟩ ≤ ⟨∂h(z⁺), u − z⁺⟩ + (1/α)·⟨z⁺ − z, u − z⁺⟩ ≤ h(u) − h(z⁺) + (1/α)·⟨z⁺ − z, u − z⁺⟩.

Choosing u = x*,

R_3 ≤ h(x*) − h(z⁺) + (1/α)·⟨z⁺ − z, x* − z⁺⟩ =(⋆) h(x*) − h(z⁺) + (1/(2α))·(‖z − x*‖² − ‖z⁺ − x*‖² − ‖z⁺ − z‖²), (10)

where (⋆) follows from ‖a + b‖² = ‖a‖² + ‖b‖² + 2⟨a, b⟩. Finally, by upper bounding (7) using (8), (9), (10), we conclude that
f(x) − f(x*) ≤ R_1 + (β/(1 − β))·(f(y) − f(x)) + ((L(1 − β) + ζ − α^{−1})/2)‖z − z⁺‖² + (1/(1 − β))·(f(x) − f(y⁺)) + h(x*) − h(z⁺) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2ζ) + (1/(2α))·(‖z − x*‖² − ‖z⁺ − x*‖²).

After simplification,

(1/(1 − β))·(f(y⁺) − f(x*)) ≤ (β/(1 − β))·(f(y) − f(x*)) + ((L(1 − β) + ζ − α^{−1})/2)‖z − z⁺‖² + h(x*) − h(z⁺) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2ζ) + R_1 + (1/(2α))·(‖z − x*‖² − ‖z⁺ − x*‖²). (11)

Note that with the convexity of h and y⁺ = (1 − β) · z⁺ + β · y, we have

h(y⁺) ≤ (1 − β)h(z⁺) + βh(y), i.e., h(z⁺) ≥ (1/(1 − β))h(y⁺) − (β/(1 − β))h(y).

Using the above inequality and choosing ζ = α^{−1} − L(1 − β) > 0 ⇔ α(1 − β) < 1/L, we can arrange (11) as

(1/(1 − β))·(F(y⁺) − F(x*)) ≤ (β/(1 − β))·(F(y) − F(x*)) + (1/(2α))·(‖z − x*‖² − ‖z⁺ − x*‖²) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2(α^{−1} − L(1 − β))) + R_1.
B.2.2 PROOF OF THEOREM 1A
Using Assumption (c), Lemma 1 with

x = x_k, z = z_k, z⁺ = z_{k+1}, y = x̃_s, y⁺ = x_{k+1}, α = α_s, β = β_s, (12)

and taking expectation, if α_s(1 − β_s) < 1/L, we have

(1/(1 − β_s))·(E_{i_k}[F(x_{k+1})] − F(x*)) + (1/(2α_s))·E_{i_k}[‖z_{k+1} − x*‖²] ≤ (β_s/(1 − β_s))·(F(x̃_s) − F(x*)) + (1/(2α_s))‖z_k − x*‖² + (σ + M)²/(2(α_s^{−1} − L(1 − β_s))).
Summing the above inequality from k = sm, . . . , sm + m − 1, we obtain

(1/((1 − β_s)m))·Σ_{j=1}^m (E[F(x_{sm+j})] − F(x*)) + (1/(2α_s m))·E[‖z_{(s+1)m} − x*‖²]
≤ (β_s/(1 − β_s))·(F(x̃_s) − F(x*)) + (1/(2α_s m))‖z_{sm} − x*‖² + (σ + M)²/(2(α_s^{−1} − L(1 − β_s))).

Using the definition of x̃_{s+1} and convexity,

(α_s/(1 − β_s))·(E[F(x̃_{s+1})] − F(x*)) + (1/(2m))·E[‖z_{(s+1)m} − x*‖²]
≤ (α_s β_s/(1 − β_s))·(F(x̃_s) − F(x*)) + (1/(2m))‖z_{sm} − x*‖² + α_s(σ² + M²)/(α_s^{−1} − L(1 − β_s)). (13)
It can be verified that with the choices β_s = s/(s + 2) and α_s = λ_1/(L(1 − β_s)), the following holds for s ≥ 0,

α_{s+1}β_{s+1}/(1 − β_{s+1}) ≤ α_s/(1 − β_s) and β_0 = 0. (14)
Note that since our analysis aims at providing intuition, we do not refine the choice of α_s as in (Hu et al., 2009; Ghadimi & Lan, 2012). Thus, by telescoping (13) from s = S − 1, . . . , 0, we obtain

(α_{S−1}/(1 − β_{S−1}))·(E[F(x̃_S)] − F(x*)) + (1/(2m))·E[‖z_{Sm} − x*‖²] ≤ (1/(2m))‖x_0 − x*‖² + Σ_{s=0}^{S−1} α_s(σ² + M²)/(α_s^{−1} − L(1 − β_s)),

and thus,

E[F(x̃_S)] − F(x*) ≤ (2L/(λ_1 m(S + 1)²))‖x_0 − x*‖² + (4L(σ² + M²)/(λ_1(S + 1)²))·Σ_{s=0}^{S−1} α_s²/(1 − α_s(1 − β_s)L)
(a)≤ (2L/(λ_1 m(S + 1)²))‖x_0 − x*‖² + (3λ_1(σ² + M²)/(L(S + 1)²))·Σ_{s=0}^{S−1} (s + 2)²
(b)≤ (2L/(λ_1 m(S + 1)²))‖x_0 − x*‖² + 8λ_1(σ² + M²)(S + 1)/L,

where (a) follows from λ_1 ≤ 2/3 and (b) holds because x ↦ (x + 2)² is non-decreasing for x ≥ 0 and thus

Σ_{s=0}^{S−1} (s + 2)² ≤ ∫_0^S (x + 2)² dx ≤ (S + 2)³/3 ≤ 8(S + 1)³/3.
Denoting

λ_1* ≜ L‖x_0 − x*‖ / (2√m · √(σ² + M²) · (S + 1)^{3/2}),

and based on the choice of λ_1 = min{2/3, λ_1*}, if λ_1* ≤ 2/3, we have

E[F(x̃_S)] − F(x*) ≤ 8‖x_0 − x*‖√(σ² + M²) / (m^{1/2}(S + 1)^{1/2}).

If λ_1* > 2/3,

E[F(x̃_S)] − F(x*) ≤ 3L‖x_0 − x*‖²/(m(S + 1)²) + 4‖x_0 − x*‖√(σ² + M²)/(m^{1/2}(S + 1)^{1/2}).

Thus, we conclude that

E[F(x̃_S)] − F(x*) ≤ 3L‖x_0 − x*‖²/(m(S + 1)²) + 8‖x_0 − x*‖√(σ² + M²)/(m^{1/2}(S + 1)^{1/2}).

Substituting S = K/m completes the proof.
B.2.3 PROOF OF THEOREM 1B
In order to prove Theorem 1b, we need the following known result for the martingale difference (cf. Lemma 2 in Lan et al. (2012)):

Lemma 2. With N > 0, let ξ_0, ξ_1, . . . , ξ_{N−1} be a sequence of i.i.d. random variables, for t = 0, . . . , N − 1, σ_t > 0 be a deterministic number and ψ_t = ψ_t(ξ_0, . . . , ξ_t) be a deterministic measurable function such that E_{ξ_t}[ψ_t] = 0 a.s. and E_{ξ_t}[exp{ψ_t²/σ_t²}] ≤ exp{1} a.s.. Then for any Λ ≥ 0,

Prob{ Σ_{t=0}^{N−1} ψ_t ≥ Λ·√(Σ_{t=0}^{N−1} σ_t²) } ≤ exp{−Λ²/3}.

To start with, using Lemma 1 with the parameter mapping (12), we have
(1/(1 − β_s))·(F(x_{k+1}) − F(x*)) + (1/(2α_s))‖z_{k+1} − x*‖²
≤ (β_s/(1 − β_s))·(F(x̃_s) − F(x*)) + (1/(2α_s))‖z_k − x*‖² + (‖∇f(x_k) − ∇f_{i_k}(x_k)‖ + M)²/(2(α_s^{−1} − L(1 − β_s))) + ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x*⟩
≤ (β_s/(1 − β_s))·(F(x̃_s) − F(x*)) + (1/(2α_s))‖z_k − x*‖² + M²/(α_s^{−1} − L(1 − β_s)) + ‖∇f(x_k) − ∇f_{i_k}(x_k)‖²/(α_s^{−1} − L(1 − β_s)) + ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x*⟩.
Summing the above inequality from k = sm, . . . , sm + m − 1 and using the choice α_s = λ_1/(L(1 − β_s)) with λ_1 ≤ 2/3, we obtain

(α_s/(1 − β_s))·(F(x̃_{s+1}) − F(x*)) + (1/(2m))‖z_{(s+1)m} − x*‖²
≤ (α_s β_s/(1 − β_s))·(F(x̃_s) − F(x*)) + (1/(2m))‖z_{sm} − x*‖² + 3α_s²M²
+ (3α_s²/m)·Σ_{k=sm}^{sm+m−1} ‖∇f(x_k) − ∇f_{i_k}(x_k)‖² + (α_s/m)·Σ_{k=sm}^{sm+m−1} ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x*⟩.
With our parameter choices, the relations in (14) hold and thus we can telescope the above inequality from s = S − 1, . . . , 0,

(α_{S−1}/(1 − β_{S−1}))·(F(x̃_S) − F(x*)) ≤ (1/(2m))‖x_0 − x*‖² + 3M²·Σ_{s=0}^{S−1} α_s² + (3/m)·R_4 + (1/m)·R_5, (15)

where R_4 ≜ Σ_{k=0}^{K−1} α_{⌊k/m⌋}²‖∇f(x_k) − ∇f_{i_k}(x_k)‖² and R_5 ≜ Σ_{k=0}^{K−1} α_{⌊k/m⌋}⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x*⟩.
Denoting V_k² ≜ ‖∇f(x_k) − ∇f_{i_k}(x_k)‖² and ᾱ ≜ Σ_{k=0}^{K−1} α_{⌊k/m⌋}² = m·Σ_{s=0}^{S−1} α_s², for R_4, by Jensen's inequality, we have

E[ exp{ (1/ᾱ)·Σ_{k=0}^{K−1} α_{⌊k/m⌋}²V_k²/σ² } ] ≤ (1/ᾱ)·Σ_{k=0}^{K−1} α_{⌊k/m⌋}²·E[exp{V_k²/σ²}] ≤(⋆) exp{1},

where (⋆) uses the additional assumption E_{i_k}[exp{V_k²/σ²}] ≤ exp{1}.
Then, based on Markov's inequality, we have for any Λ ≥ 0,

Prob{ exp{ (1/ᾱ)·Σ_{k=0}^{K−1} α_{⌊k/m⌋}²V_k²/σ² } ≥ exp{Λ + 1} } ≤ exp{−Λ},
Prob{ R_4 ≥ (Λ + 1)σ²m·Σ_{s=0}^{S−1} α_s² } ≤ exp{−Λ}. (16)
For R_5, since we have E_{i_k}[α_{⌊k/m⌋}⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x*⟩] = 0 and

E_{i_k}[ exp{ α_{⌊k/m⌋}²⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x*⟩² / (α_{⌊k/m⌋}²σ²D_X²) } ] ≤ E_{i_k}[exp{V_k²/σ²}] ≤ exp{1},

which is based on the “light tail” assumption, using Lemma 2, we obtain

Prob{ R_5 ≥ ΛσD_X·√(m·Σ_{s=0}^{S−1} α_s²) } ≤ exp{−Λ²/3}. (17)

Combining (15), (16) and (17), based on the parameter setting (cf. (5)) and using the notation

K_0(m) ≜ 3Lm‖x_0 − x*‖²/(K + m)² + 8‖x_0 − x*‖√(σ² + M²)/√(K + m),
R_6 ≜ (12Lσ²/(λ_1(S + 1)²))·Σ_{s=0}^{S−1} α_s² + (4LσD_X/(λ_1(S + 1)²√m))·√(Σ_{s=0}^{S−1} α_s²),

we conclude that

Prob{ F(x̃_S) − F(x*) ≤ K_0(m) + ΛR_6 } ≥ 1 − (exp{−Λ²/3} + exp{−Λ}).
For R_6, using the choice of α_s and λ_1, we obtain

R_6 ≤ 4√6·σD_X/(3√(K + m)) + 8λ_1σ²(S + 1)/L ≤ 4√6·σD_X/(3√(K + m)) + 4σ²‖x_0 − x*‖/(√(K + m)·√(σ² + M²)) ≤ 4σ(3‖x_0 − x*‖ + √6·D_X)/(3√(K + m)),

which completes the proof.
B.2.4 PROOF OF THEOREM 2
Using Assumption (c), Lemma 1 with

x = x_k^{j_k}, z = z_k, z⁺ = z_{k+1}, y = φ_{j_k}^k, y⁺ = φ_{j_k}^{k+1}, α = α_k, β = β_k,

and taking expectation, if α_k(1 − β_k) < 1/L, we have

(1/(1 − β_k))·E_{i_k,j_k}[F(φ_{j_k}^{k+1}) − F(x*)] + (1/(2α_k))·E_{i_k,j_k}[‖z_{k+1} − x*‖²]
≤ (β_k/(1 − β_k))·E_{j_k}[F(φ_{j_k}^k) − F(x*)] + (1/(2α_k))‖z_k − x*‖² + (σ + M)²/(2(α_k^{−1} − L(1 − β_k))). (18)
Note that

E_{i_k,j_k}[F(φ_{j_k}^{k+1}) − F(x*)] = E_{i_k,j_k}[ Σ_{j=1}^m (F(φ_j^{k+1}) − F(x*)) ] − E_{j_k}[ Σ_{j≠j_k} (F(φ_j^k) − F(x*)) ].

Dividing both sides of (18) by m and then adding (1/((1 − β_k)m))·E_{j_k}[Σ_{j≠j_k}(F(φ_j^k) − F(x*))] to both sides, we obtain

(1/(1 − β_k))·E_{i_k,j_k}[ (1/m)Σ_{j=1}^m F(φ_j^{k+1}) − F(x*) ] + (1/(2α_k m))·E_{i_k,j_k}[‖z_{k+1} − x*‖²]
≤ −(1/m)·E_{j_k}[F(φ_{j_k}^k) − F(x*)] + (1/(1 − β_k))·[ (1/m)Σ_{j=1}^m F(φ_j^k) − F(x*) ] + (1/(2α_k m))‖z_k − x*‖² + (σ + M)²/(2m(α_k^{−1} − L(1 − β_k)))
= ((1 − (1 − β_k)/m)/(1 − β_k))·[ (1/m)Σ_{j=1}^m F(φ_j^k) − F(x*) ] + (1/(2α_k m))‖z_k − x*‖² + (σ + M)²/(2m(α_k^{−1} − L(1 − β_k))). (19)
It can be verified that with our parameter choice, β_k = (k/m)/(k/m + 2) and α_k = λ_2/(L(1 − β_k)), the following holds for k ≥ 0,

α_{k+1}·(1 − (1 − β_{k+1})/m)/(1 − β_{k+1}) ≤ α_k/(1 − β_k) and β_0 = 0.
Note that since our analysis aims at providing intuition, we do not refine the choice of α_k as in (Hu et al., 2009; Ghadimi & Lan, 2012). Then, we can telescope (19) from k = K − 1, . . . , 0, which results in

(α_{K−1}/(1 − β_{K−1}))·E[ (1/m)Σ_{j=1}^m F(φ_j^K) − F(x*) ] + (1/(2m))·E[‖z_K − x*‖²]
≤ (λ_2(m − 1)/(Lm))·(F(x_0) − F(x*)) + (1/(2m))‖x_0 − x*‖² + Σ_{k=0}^{K−1} α_k(σ + M)²/(2m(α_k^{−1} − L(1 − β_k))).
Using the definition of φ̄_K and convexity, we obtain

E[F(φ̄_K) − F(x*)] ≤ ((1 − β_{K−1})/α_{K−1})·[ (λ_2(m − 1)/(Lm))·(F(x_0) − F(x*)) + (1/(2m))‖x_0 − x*‖² ] + ((1 − β_{K−1})/α_{K−1})·Σ_{k=0}^{K−1} α_k(σ + M)²/(2m(α_k^{−1} − L(1 − β_k)))
(a)= 4(m − 1)(F(x_0) − F(x*))/(m((K−1)/m + 2)²) + 2L‖x_0 − x*‖²/(λ_2 m((K−1)/m + 2)²) + (3λ_2(σ + M)²/(2Lm((K−1)/m + 2)²))·Σ_{k=0}^{K−1} (k/m + 2)²
(b)≤ 4(m − 1)(F(x_0) − F(x*))/(m((K−1)/m + 2)²) + 2L‖x_0 − x*‖²/(λ_2 m((K−1)/m + 2)²) + 4λ_2(σ + M)²((K−1)/m + 2)/L, (20)

where (a) uses λ_2 ≤ 2/3, (b) follows from simple integration arguments and that K/m + 2 ≤ 2((K−1)/m + 2) since K ≥ 1, m ≥ 1.
Based on the choice of

λ_2 = min{ 2/3, L‖x_0 − x*‖ / (√(2m)·(σ + M)·((K−1)/m + 2)^{3/2}) },

(20) can be further upper bounded as

E[F(φ̄_K) − F(x*)] ≤ 4(m − 1)(F(x_0) − F(x*))/(m((K−1)/m + 2)²) + 3L‖x_0 − x*‖²/(m((K−1)/m + 2)²) + 4√2·‖x_0 − x*‖(σ + M)/(m^{1/2}((K−1)/m + 2)^{1/2}).
B.3 CONNECTIONS BETWEEN AM1-SGD AND KATYUSHA
The discussion in this section aims to shed light on the understanding of the experimental results, which also shows some interesting relations between AM1-SGD and Katyusha.
The high level idea of Katyusha momentum is that it works as a “magnet” inside an epoch of SVRG updates, which “stabilizes” the iterates so as to make Nesterov's momentum effective (Allen-Zhu, 2018). In theory, the key effect of Katyusha momentum is that it allows the tightest possible variance bound for the stochastic gradient estimator of SVRG (cf. Lemma 2.4 and its comments in Allen-Zhu (2018)). In this sense, we can interpret Katyusha momentum as a variance reducer that further reduces the variance of SVRG. Below we show the similarity between the construction of Katyusha and AM1-SGD, based on which we conjecture that the amortized momentum can also reduce the variance of SGD (and thus increase the robustness). However, in theory, following a similar analysis of Katyusha, we cannot guarantee a reduction of σ in the worst case.
Deriving AM1-SGD from Katyusha Katyusha has the following scheme (non-proximal, in the original notations, σ is the strong convexity parameter, cf. Algorithm 1 with Option I in Allen-Zhu (2018))12:

Initialize: x̃_0 = y_0 = z_0 = x_0, η = 1/(3L), ω = 1 + ασ.
1: for s = 0, . . . , S − 1 do
2:  Compute and store ∇f(x̃_s).
3:  for j = 0, . . . , m − 1 do
4:   k = sm + j.
5:   x_k = τ_1 · z_k + τ_2 · x̃_s + (1 − τ_1 − τ_2) · y_k.
6:   ∇̃_k = ∇f_{i_k}(x_k) − ∇f_{i_k}(x̃_s)

| 1. What is the main contribution of the paper, and how does it improve upon existing methods?
2. How convincing are the motivation and experimental results of the paper, especially in the context of deep learning?
3. Can the authors provide practical evidence to support their claim that Amortized Nesterov's Momentum is more robust and has faster convergence in the early stage without losing generalization performance?
4. How does the proposed approach compare to other baselines in terms of computational efficiency and memory usage?
5. How does the theoretical analysis of the paper relate to deep learning applications?
6. Are there any inconsistencies or issues with the experiment section, such as the choice of hyperparameters and the comparison between different methods?
7. Is there enough background information provided in the paper regarding Katyusha momentum? | Review | Review
The authors proposed Amortized Nesterov’s Momentum, a variant of Nesterov’s momentum that utilizes several past iterates, instead of one iterate, to provide the momentum. The goal is to have more robust iterates, faster convergence in the early stage and higher efficiency. The authors designed two different realizations, AM1-SGD and AM2-SGD.
Comments:
My major concern for this paper is its unconvincing motivation and experiment results, especially when the approach is designed for deep learning.
The motivation for the proposed approach is not quite convincing. The authors said that “due to the large stochasticity, SGD with Nesterov's momentum is not robust... This increased uncertainty slows down its convergence especially in the early stage of training and makes its single run less trustworthy.” For image classification, Nesterov momentum is very popular and the final convergence values of different trials seem to be similar. It would be more convincing if the authors could provide practical evidence supporting this claim.
It was claimed that Amortized Nesterov's Momentum has “higher efficiency and faster convergence in the early stage without losing the good generalization performance”. What is the benefit or advantage of having faster early convergence without improving the final generalization performance?
The authors claim that “M-SGD2 is more robust and achieves slightly better performance”, in Figure 1a, however, it is really hard to tell the difference between M-SGD2 and M-SGD from Figure 1a.
The efficiency improvement on page 4 is really hard to follow for comparison with Algorithm 1 on page 3. Though m > 2 could reduce the number of operations in steps 5 and 6, I don't think this is a computational bottleneck. I believe these updates should be very fast in comparison with forward and gradient calculation. Making the 1% of the computation 50% faster does not make the method more efficient. I would like to know how much computation cost can be saved with this modification. On the other hand, adding one more auxiliary buffer (scaled momentum) could significantly impact the training, as memory is often the limiting factor.
In section 3.1, what is an “identical iteration”? It is hard to compare AM2-SGD with AM1-SGD. It would be easier to follow if the side-by-side algorithm comparison were shown earlier.
The section 4’s theoretical analysis based on the convex composite problem is not quite convincing. I am not sure how these results are related with the deep learning applications.
In the experiment section, the comparison of AM1/2-SGD with other baselines seems not quite consistent. The authors first state that they use 0.1 learning rate and 0.9 momentum for all methods; however, the setting for M-SGD uses 0.99 momentum and a different learning rate schedule. This makes the comparison not very meaningful, while AM1-SGD and AM2-SGD do not use learning rate restarts. With so many differences, AM1-SGD and AM2-SGD do not look that different from M-SGD. In the ImageNet ResNet152 task, M-SGD is even better than AM1-SGD. This makes the conclusion that “AM1-SGD has a lightweight design, which can serve as an excellent replacement for M-SGD in large-scale deep learning tasks” not quite valid.
Minor: The authors may assume readers are already familiar with Katyusha momentum; I think more background about it is needed.
ICLR | Title
Amortized Nesterov's Momentum: Robust and Lightweight Momentum for Deep Learning
Abstract
Stochastic Gradient Descent (SGD) with Nesterov’s momentum is a widely used optimizer in deep learning, which is observed to have excellent generalization performance. In this work, we propose Amortized Nesterov’s Momentum, which is a special variant of Nesterov’s momentum. Compared with Nesterov’s momentum, our new momentum has more robust iterates and higher efficiency. Our empirical results show that it achieves faster early convergence and comparable final generalization performance with little-to-no tuning. Just like Nesterov’s method, the new schemes are also proved optimal in general convex setting. Our analysis sheds light on the understanding of the new variant.
1 INTRODUCTION
In recent years, Gradient Descent (GD) (Cauchy, 1847) and its variants have been widely used to solve large scale machine learning problems. Among them, Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951), which replaces gradient with an unbiased stochastic gradient estimator, is a popular choice of optimizer especially for neural network training which requires lower precision. Sutskever et al. (2013) found that using SGD with Nesterov’s momentum (Nesterov, 1983; 2013b), which was originally designed to accelerate deterministic convex optimization, achieves substantial speedups for training neural networks. This finding essentially turns SGD with Nesterov’s momentum into the benchmarking method of neural network design, especially for classification tasks (He et al., 2016b;a; Zagoruyko & Komodakis, 2016; Huang et al., 2017). It is observed that in these tasks, the momentum technique plays a key role in achieving good generalization performance.
Adaptive methods (Duchi et al., 2011; Kingma & Ba, 2015; Tieleman & Hinton, 2012; Reddi et al., 2018), which are also becoming increasingly popular in the deep learning community, diagonally scale the gradient to speed up training. However, Wilson et al. (2017) show that these methods always generalize poorly compared with SGD with momentum (both classical momentum (Polyak, 1964) and Nesterov’s momentum).
In this work, we introduce Amortized Nesterov’s Momentum, which is a special variant of Nesterov’s momentum. From users’ perspective, the new momentum has only one additional integer hyper-parameter m to choose, which we call the amortization length. Learning rate and momentum parameter of this variant are strictly aligned with Nesterov’s momentum and by choosing m = 1, it recovers Nesterov’s momentum. This paper conducts an extensive study based on both empirical evaluation and convex analysis to identify the benefits of the new variant (or from users’ angle, to set m apart from 1). We list the advantages of Amortized Nesterov’s Momentum as follows:
• Increasing m improves robustness1. This is an interesting property since the new momentum not only provides acceleration, but also enhances the robustness. We provide an understanding of this property by analyzing the relation between convergence rate andm in the convex setting. • Increasing m reduces (amortized) iteration complexity. • A suitably chosen m boosts the convergence rate in the early stage of training and produces
comparable final generalization performance.
1In this work, robustness refers to the probability of an optimizer significantly deviating from its expected performance, which can be reflected by the deviations of accuracy or loss in the training process over multiple runs that start with the same initial guess.
• It is easy to tune m. The performances of the methods are stable for a wide range of m and we prove that the methods converge for any valid choice of m in the convex setting. • If m is not too large, the methods obtain the optimal convergence rate in general convex setting,
just like Nesterov’s method.
The new variant does have some minor drawbacks: it requires one more memory buffer, which is acceptable in most cases, and it shows some undesired behaviors when working with learning rate schedulers, which can be addressed by a small modification. Considering these pros and cons, we believe that the proposed variant can benefit many large-scale deep learning tasks.
Our high level idea is simple: the stochastic Nesterov’s momentum can be unreliable since it is provided only by the previous stochastic iterate. The iterate potentially has large variance, which may lead to a false momentum that perturbs the training process. We thus propose to use the stochastic Nesterov’s momentum based on several past iterates, which provides robust acceleration. In other words, instead of immediately using an iterate to provide momentum, we put the iterate into an “amortization plan” and use it later.
2 PRELIMINARIES: SGD AND NESTEROV’S MOMENTUM
We start with a review of SGD and Nesterov’s momentum. We discuss some subtleties in the implementation and evaluation, which contributes to the interpretation of our methods.
Notations In this paper, we use x ∈ Rd to denote the vector of model parameters. ‖·‖ and 〈·, ·〉 denote the standard Euclidean norm and inner product, respectively. Scalar multiplication for v ∈ Rd and β ∈ R is denoted as β ·v. f : Rd → R denotes the loss function to be minimized and∇f(x) represents the gradient of f evaluated at x. We denote the unbiased stochastic gradient estimator of ∇f(x) as ∇fi(x) with the random variable i independent of x (e.g., using mini-batch). We use x0 ∈ Rd to denote the initial guess.
SGD SGD has the following simple iterative scheme, where γ ∈ R denotes the learning rate:
xk+1 = xk − γ · ∇fik(xk), for k ≥ 0.
Nesterov’s momentum The original Nesterov’s accelerated gradient (with constant step) (Nesterov, 1983; 2013b) has the following scheme2 (y ∈ Rd, η, β ∈ R and y0 = x0):
yk+1 = xk − η · ∇f(xk), xk+1 = yk+1 + β · (yk+1 − yk), for k ≥ 0,
(1)
where we call β · (yk+1 − yk) the momentum. By simply replacing ∇f(xk) with ∇fik(xk), we obtain the SGD with Nesterov’s momentum, which is widely used in deep learning. To make this point clear, recall that the reformulation in Sutskever et al. (2013) (scheme (2), also the Tensorflow (Abadi et al., 2016) version) and the PyTorch (Paszke et al., 2017) version (scheme (3)) have the following schemes (v, vpt ∈ Rd and v0 = vpt0 = 0): for k ≥ 0,
(2) { vk+1 = β · vk − η · ∇fik(yk + β · vk), yk+1 = yk + vk+1.
(3) { vptk+1 = β · v pt k +∇fik(xk),
xk+1 = xk − η · (β · vptk+1 +∇fik(xk)).
Here the notations are modified based on their equivalence to scheme (1). It can be verified that schemes (2) and (3) are equivalent to (1) through vk = β−1 ·(xk−yk) and vptk = η−1β−1 ·(yk−xk), respectively (see Defazio (2018) for other equivalent forms of scheme (1)).
Interestingly, both PyTorch and Tensorflow3 track the values {xk}, which we refer to as M-SGD. This choice allows a consistent implementation when wrapped in a generic optimization layer (Defazio, 2018). However, the accelerated convergence rate (in the convex case) is built upon {yk} (Nesterov, 2013b) and {xk} may not possess such a theoretical improvement. We use OM-SGD to refer to the Original M-SGD that outputs {yk}.
2We exchange the notations of x and y in Nesterov (2013b). 3Tensorflow tracks the values {yk + β · vk} = {xk}.
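A small numerical check of the equivalence noted above (our own illustrative quadratic with a deterministic gradient standing in for ∇f_{i_k}) confirms that scheme (3) traces the same {x_k} as scheme (1):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4)); b = rng.standard_normal(8)
grad = lambda x: A.T @ (A @ x - b)   # gradient of 0.5*||Ax - b||^2
eta, beta = 0.02, 0.9

x1 = np.zeros(4); y1 = np.zeros(4)   # scheme (1): iterates x_k and y_k
x3 = np.zeros(4); v = np.zeros(4)    # scheme (3): iterate x_k and buffer v_k^{pt}
for _ in range(30):
    y_new = x1 - eta * grad(x1)      # scheme (1): gradient step, then momentum
    x1 = y_new + beta * (y_new - y1)
    y1 = y_new
    g = grad(x3)                     # scheme (3): the PyTorch form
    v = beta * v + g
    x3 = x3 - eta * (beta * v + g)
print(np.allclose(x1, x3))           # True: identical {x_k}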
SGD and M-SGD In order to study the features of momentum, in this work, we regard momentum as an add-on to plain SGD, which corresponds to fixing the learning rates4 γ = η. From the interpretation in Allen-Zhu & Orecchia (2017), η represents the learning rate for the gradient descent “inside” Nesterov’s method. To introduce the evaluation metrics of this paper, we report the results of training ResNet34 (He et al., 2016b) on CIFAR-10 (Krizhevsky et al., 2009) (our basic case study) using SGD and M-SGD in Figure 1. In this paper, all the multiple runs start with the same initial guess x0. Figure 1a shows that Nesterov’s momentum hurts the convergence in the first 60 epochs but accelerates the final convergence, which verifies the importance of momentum for achieving high accuracy. Figure 1b depicts the robustness of M-SGD and SGD, which suggests that adding Nesterov’s momentum slightly increases the uncertainty in the training process of SGD.
Train-batch loss vs. Full-batch loss In Figure 1c, train-batch loss stands for the average of batch losses forwarded in an epoch, which is commonly used to indicate the training process in deep learning. Full-batch loss is the average loss over the entire training dataset evaluated at the end of each epoch. In terms of optimizer evaluation, full-batch loss is much more informative than trainbatch loss as it reveals the robustness of an optimizer. However, full-batch loss is too expensive to evaluate and thus we only measure it on small datasets. On the other hand, test accuracy couples optimization and generalization, but since it is also evaluated at the end of the epoch, its convergence is similar to full-batch loss. Considering the basic usage of momentum in deep learning, we mainly use test accuracy to evaluate optimizers. We provide more discussion on this issue in Appendix C.2.
M-SGD vs. OM-SGD We also include OM-SGD in Figure 1a. In comparison, the final accuracies of M-SGD and OM-SGD are 94.606%± 0.152% and 94.728%± 0.111% with average deviations at 1.040% and 0.634%, respectively. This difference can be explained following the interpretation in Hinton (2012) that {xk} are the points after “jump” and {yk} are the points after “correction”.
3 AMORTIZED NESTEROV’S MOMENTUM
In this section, we formally introduce SGD with Amortized Nesterov’s Momentum (AM1-SGD) in Algorithm 1 with the following remarks:
Options It can be verified that if m = 1, AM1-SGD with Option I degenerates to M-SGD and Option II corresponds to OM-SGD. Just like the case for M-SGD and OM-SGD, the accelerated convergence rate is built upon Option II while Option I is easier to be implemented in a generic optimization layer5. Intuitively, Option I is SGD with amortized momentum and Option II applies an m-iterations tail averaging on Option I.
4Ma & Yarats (2019) observed that when effective learning rates γ = η(1 − β)−1 are fixed, M-SGD and SGD have similar performance. We provide a discussion on this observation in Appendix C.1.
5To implement Option II, we can either maintain another identical network for the shifted point x̃ or temporarily change the network parameters in the evaluation phase.
Algorithm 1 AM1-SGD
Input: Initial guess x_0, learning rate η, momentum β, amortization length m, iteration number K.
Initialize: x ← x_0, x̃ ← x_0, x̃⁺ ← 0 {a running average}.
1: for k = 0, . . . , K − 1 do
2:  x ← x − η · ∇f_{i_k}(x).
3:  x̃⁺ ← x̃⁺ + (1/m) · x.
4:  if (k + 1) mod m = 0 then
5:   x ← x + β · (x̃⁺ − x̃). {adding amortized momentum}
6:   x̃ ← x̃⁺, x̃⁺ ← 0.
7:  end if
8: end for
Output: Option I: x, Option II: x̃.
* The symbol ‘←’ denotes assignment.
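A minimal NumPy sketch of Algorithm 1 reads as follows; the stochastic gradient oracle grad(x) is an illustrative assumption, and the function returns the Option II output (return x instead for Option I):

import numpy as np

def am1_sgd(x0, grad, eta, beta, m, K):
    x = x0.copy()
    x_tilde = x0.copy()                   # last finished m-iterations average
    x_tilde_plus = np.zeros_like(x0)      # running average of the current window
    for k in range(K):
        x = x - eta * grad(x)                        # step 2: plain SGD step
        x_tilde_plus = x_tilde_plus + x / m          # step 3: running average
        if (k + 1) % m == 0:
            x = x + beta * (x_tilde_plus - x_tilde)  # step 5: amortized momentum
            x_tilde = x_tilde_plus                   # step 6
            x_tilde_plus = np.zeros_like(x0)
    return x_tilde                        # Option II output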
Efficiency We can improve the efficiency of Algorithm 1 by maintaining a running scaled momentum ṽ⁺ ≜ m · (x̃⁺ − x̃) instead of the running average x̃⁺, by replacing the following steps in Algorithm 1:

Initialize: x ← x_0, x̃ ← x_0, ṽ⁺ ← −m · x_0,
Step 3: ṽ⁺ ← ṽ⁺ + x.
Step 5: x ← x + (β/m) · ṽ⁺.
Step 6: x̃ ← x̃ + (1/m) · ṽ⁺, ṽ⁺ ← −m · x̃.
Then, in one m-iterations loop, for each of the first m − 1 iterations, AM1-SGD requires 1 vector addition and 1 scaled vector addition. At the m-th iteration, it requires 1 vector addition, 1 scalar-vector multiplication and 3 scaled vector additions. In comparison, M-SGD (standard PyTorch) requires 1 vector addition, 1 (in-place) scalar-vector multiplication and 2 scaled vector additions per iteration. Thus, as long as m > 2, AM1-SGD has lower amortized cost than M-SGD. For memory complexity, AM1-SGD requires one more auxiliary buffer than M-SGD.
Tuning m We did a parameter sweep for m in our basic case study. We plot the final and the average deviation of test accuracies over 5 runs against m in Figure 2a. Note that m = 1 corresponds to the results of M-SGD and OM-SGD, which are already given in Figure 1. From this empirical result, m introduces a trade-off between final accuracy and robustness (the convergence behaviors can be found in Appendix A.1). Figure 2a suggests that m = 5 is a good choice for this task. For simplicity, and also as a recommended setting, we fix m = 5 for the rest of experiments in this paper.
A momentum that increases robustness To provide a stronger justification, we ran 20 seeds with m = 5 in Figure 2b and the detailed data are given in Figure 3 & Table 1. The results show that the amortized momentum significantly increases the robustness. Intuitively, the gap between Option I and Option II can be understood as the effect of tail averaging. However, the large gap between Option I and SGD is somewhat mysterious: what Option I does is to inject a very large momentum6 into SGD every m iterations. It turns out that this momentum not only provides acceleration, but also helps the algorithm become more robust than SGD. This observation basically differentiates AM1-SGD from a simple interpolation in between M-SGD and SGD.
6Amortized momentum β · (x̃⁺ − x̃) is expected to be much larger than Nesterov's momentum β · (y_{k+1} − y_k).
Learning rate scheduler issue We observed that when we use schedulers with a large decay factor and the momentum β is too large for the task (e.g., 0.995 for the task of this section), there would be a performance drop after the learning rate reduction. We believe that it is caused by the different cardinalities of iterates being averaged in x̃+, which leads to a false momentum. This issue is resolved by restarting the algorithm after each learning rate reduction inspired by (O’donoghue & Candes, 2015). We include more discussion and evidence in Appendix A.4.
3.1 AM2-SGD: A VARIANT WITH IDENTICAL ITERATIONS
Algorithm 2 AM2-SGD
Input: Initial guess x_0, amortization length m, a point table φ = [φ_1 · · · φ_m] ∈ R^{d×m}, learning rate η, momentum β, iteration number K.
Initialize: φ_j^0 = x_0, ∀j ∈ [m]*. {j_k | j_k ∈ [m]}_{k=0}^{K−1} is a sequence of uniformly random indexes. If Option II is used, φ̄_0 = x_0. {a running average for the point table φ}
1: for k = 0, . . . , K − 1 do
2:  φ_{j_k}^{k+1} = x_k − η · ∇f_{i_k}(x_k) and keep other entries unchanged (i.e., φ_j^{k+1} = φ_j^k for j ≠ j_k).
3:  x_{k+1} = φ_{j_k}^{k+1} + β · (φ_{j_{k+1}}^{k+1} − φ_{j_k}^k). {adding amortized momentum}
4:  if Option II then φ̄_{k+1} = φ̄_k + (1/m) · (φ_{j_k}^{k+1} − φ_{j_k}^k).
5: end for
Output: Option I (not recommended): x_K, Option II: φ̄_K.
* [m] denotes the set {1, . . . , m}.
While enjoying an improved efficiency, AM1-SGD does not have identical iterations7, which to some extent limits its extensibility to other settings (e.g., asynchronous setting). In this section, we propose a variant of Amortized Nesterov’s Momentum (AM2-SGD, Algorithm 2) to address this problem. To show the characteristics of AM2-SGD, we make the following remarks:
Trading memory for extensibility In expectation, the point table φ stores the most recent m iterations and thus the output φ̄K is an m-iterations tail average, which connects to AM1-SGD. The relation between AM1-SGD and AM2-SGD resembles that of SVRG (Johnson & Zhang, 2013) and SAGA (Defazio et al., 2014), the most popular methods in finite-sum convex optimization: to reuse the information from several past iterates, we can either maintain a “snapshot” that aggregates the information or keep the iterates in a table. A side-by-side comparison is given in Section 4.
Options and convergence As in the case of AM1-SGD, if m = 1, AM2-SGD with Option I corresponds to M-SGD and Option II is OM-SGD. In our preliminary experiments (which can be found in Appendix A), the convergence of AM2-SGD is similar to AM1-SGD and it also has the learning rate scheduler issue; we also observed that Option I is consistently worse than Option II and does not seem to benefit from increasing m. Thus, we do not recommend using Option I. We also set m = 5 for AM2-SGD for its evaluation due to the similarity.
7For AM1-SGD, the workload varies for different iteration k due to the if-clause at Step 4.
Additional randomness {j_k} In our implementation, at each iteration, we sample an index in [m] as j_{k+1} and obtain the stored index j_k. We observed that with Option I, AM2-SGD has much larger deviations than AM1-SGD, which we believe is caused by the additional random indexes {j_k}.
4 CONVERGENCE RESULTS
The original Nesterov’s accelerated gradient is famous for its optimal convergence rates for solving convex problems. In this section, we analyze the convergence rates for AM1-SGD and AM2-SGD in the convex case, which explicitly model the effect of amortization (i.e., m). While these rates do not hold for deep learning problems in general, they help us understand the observed convergence behaviors of the proposed methods, especially on how they differ from M-SGD (m = 1). Moreover, the analysis also provides intuition on tuning m. Since the original Nesterov’s method is deterministic (Nesterov, 1983; 2013b), we follow the setting of its stochastic variants (Lan, 2012; Ghadimi & Lan, 2012), in which Nesterov’s acceleration also achieves the optimal rates.
We consider the following convex composite problem (Beck & Teboulle, 2009; Nesterov, 2013a):

min_{x∈X} { F(x) ≜ f(x) + h(x) }, (4)
where X ⊆ R^d is a non-empty closed convex set and h is a proper convex function with its proximal operator prox_{αh}(·)8 available. We impose the following assumptions on the regularity of f and the stochastic oracle ∇f_i (identical to the ones in Ghadimi & Lan (2012) with µ = 0):
Assumptions. For some L ≥ 0, M ≥ 0, σ ≥ 0,
(a) 0 ≤ f(y) − f(x) − ⟨∇f(x), y − x⟩ ≤ (L/2)‖y − x‖² + M‖y − x‖, ∀x, y ∈ X.9
(b) E_i[∇f_i(x)] = ∇f(x), ∀x ∈ X.
(c) E_i[‖∇f_i(x) − ∇f(x)‖²] ≤ σ², ∀x ∈ X.
The notation E_{i_k}[·] denotes E[· | (i_0, . . . , i_{k−1})] for a random process i_0, i_1, . . .. These assumptions cover several important classes of convex problems. For example, (a) covers the cases of f being L-smooth (M = 0) or L_0-Lipschitz continuous (M = 2L_0, L = 0) convex functions, and if σ = 0 in (c), the assumptions cover several classes of deterministic convex programming problems. We denote x⋆ ∈ X as a solution to problem (4) and x_0 ∈ X as the initial guess. Unlike its usage in deep learning, the momentum parameter β is always a variable in general convex analysis. For the simplicity of analysis, we reformulate AM1-SGD (Algorithm 1) and AM2-SGD (Algorithm 2) into the following schemes¹⁰ (z ∈ X, α ∈ R):
AM1-SGD (reformulated, proximal)
Initialize: x̃_0 = z_0 = x_0, S = K/m.
1: for s = 0, . . . , S − 1 do
2:   for j = 0, . . . , m − 1 do
3:     k = sm + j.
4:     x_k = (1 − β_s) · z_k + β_s · x̃_s.
5:     z_{k+1} = prox_{α_s h}{z_k − α_s · ∇f_{i_k}(x_k)}.
6:     (x_{k+1} = (1 − β_s) · z_{k+1} + β_s · x̃_s.)
7:   end for
8:   x̃_{s+1} = (1/m) Σ_{j=1}^m x_{sm+j}.
9: end for
Output: x̃_S.

AM2-SGD (reformulated, proximal)
Initialize: z_0 = φ_j^0 = x_0, ∀j ∈ [m].
1: for k = 0, . . . , K − 1 do
2:   Sample j_k uniformly in [m].
3:   x_k^{j_k} = (1 − β_k) · z_k + β_k · φ_{j_k}^k.
4:   z_{k+1} = prox_{α_k h}{z_k − α_k · ∇f_{i_k}(x_k^{j_k})}.
5:   φ_{j_k}^{k+1} = (1 − β_k) · z_{k+1} + β_k · φ_{j_k}^k.
6: end for
Output: φ̄_K = (1/m) Σ_{j=1}^m φ_j^K.
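To make the proximal step in the schemes concrete: for h = τ‖·‖₁ the prox has the classical soft-thresholding closed form. The ℓ1 choice of h and all names below are ours, for illustration only; the schemes only require that prox_{αh} is computable (cf. footnote 8 below).

```python
import numpy as np

def prox_l1(x, t):
    """prox_{t*||.||_1}(x): elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Step 5 of the reformulated AM1-SGD with h = tau*||.||_1 would then read
# (g is the sampled stochastic gradient at x_k):
#   z = prox_l1(z - alpha_s * g, alpha_s * tau)
```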
We show in Appendix B.1 that when h ≡ 0 and β is a constant, the reformulated schemes AM1-SGD and AM2-SGD are equivalent to Algorithm 1 and Algorithm 2 through α_s = η(1 − β_s)^{−1} and α_k = η(1 − β_k)^{−1}. These reformulations are basically how Nesterov's momentum was migrated into deep learning (Sutskever et al., 2013). Then we establish the convergence rates for AM1-SGD and AM2-SGD as follows. All the proofs in this paper are given in Appendix B.2.

⁸ ∀x ∈ R^d, prox_{αh}(x) := argmin_{u∈X} { (1/2)‖u − x‖² + αh(u) }; see Parikh et al. (2014).
⁹ When M > 0, f is not necessarily differentiable and we keep using the notation ∇f(x) to denote an arbitrary subgradient of f at x for consistency.
¹⁰ For simplicity, we assume K is divisible by m.

Theorem 1. For the reformulated AM1-SGD, suppose we choose

β_s = s/(s + 2) and α_s = λ_1/(L(1 − β_s)) with λ_1 = min{ 2/3, L‖x_0 − x⋆‖ / (2√m · √(σ² + M²) · (S + 1)^{3/2}) }.   (5)

Then,

(a) The output x̃_S satisfies

E[F(x̃_S)] − F(x⋆) ≤ 3Lm‖x_0 − x⋆‖²/(K + m)² + 8‖x_0 − x⋆‖√(σ² + M²)/√(K + m) =: K_0(m).

(b) If the variance has a “light tail”, i.e., E_i[exp{‖∇f_i(x) − ∇f(x)‖²/σ²}] ≤ exp{1}, ∀x ∈ X, and X is compact, denoting D_X := max_{x∈X} ‖x − x⋆‖, for any Λ ≥ 0, we have

Prob{ F(x̃_S) − F(x⋆) ≤ K_0(m) + 4Λσ(3‖x_0 − x⋆‖ + √6 · D_X)/(3√(K + m)) } ≥ 1 − (exp{−Λ²/3} + exp{−Λ}).
Remarks: (a) Regarding K_0(m), its minimum is obtained at either m = 1 or m = K. Note that for AM1-SGD, m is strictly constrained in {1, . . . , K}. It can be verified that when m = K, AM1-SGD becomes the modified mirror descent SA (Lan, 2012), or under the Euclidean setting, the SGD that outputs the average of the whole history, which is rarely used in practice. In this case, the convergence rate in Theorem 1a becomes the corresponding O(L/K + (σ + M)/√K) (cf. Theorem 1 in Lan (2012)). Thus, we can regard AM1-SGD as a smooth transition between AC-SA and the modified mirror descent SA. (b) The additional compactness and “light tail” assumptions are similarly required in Nemirovski et al. (2009); Lan (2012); Ghadimi & Lan (2012). Recently, Juditsky et al. (2019) established similar bounds under weaker assumptions by truncating the gradient. However, as indicated by the authors, their technique cannot be used for accelerated algorithms due to the accumulation of bias.
Understandings: Theorem 1a gives the expected performance in terms of the full-batch loss F(x̃) − F(x⋆), from which the trade-off of m is clear: increasing m improves the dependence on the variance σ but deteriorates the O(L/K²) term (i.e., the acceleration). Based on this trade-off, we can understand the empirical results in Figure 2b: the faster convergence in the early stage could be the result of a better control on σ, and the slightly lowered final accuracy is possibly caused by the reduced acceleration effect. Theorem 1b provides the probability of the full-batch loss deviating from its expected performance (i.e., K_0(m)). It is clear that increasing m leads to smaller deviations with the same probability, which sheds light on the increased robustness observed in Figure 2. Since the theorem is built on the full-batch loss, we did an experiment based on this metric in Figure 4 & Table 2. Here we choose training a smaller ResNet18 with pre-activation (He et al., 2016a) on CIFAR-10 as the case study (the test accuracy is reported in Appendix A.5).
For AM2-SGD, we only give the expected convergence results as follows.

Theorem 2. For the reformulated AM2-SGD, if we choose

β_k = (k/m)/(k/m + 2) and α_k = λ_2/(L(1 − β_k)) with λ_2 = min{ 2/3, L‖x_0 − x⋆‖ / (√(2m) · (σ + M) · ((K−1)/m + 2)^{3/2}) },

the output φ̄_K satisfies

E[F(φ̄_K)] − F(x⋆) ≤ ( 4(m² − m)(F(x_0) − F(x⋆)) + 3Lm‖x_0 − x⋆‖² ) / (K + 2m − 1)² + 4√2 · ‖x_0 − x⋆‖(σ + M)/√(K + 2m − 1).
Remark: In comparison with Theorem 1a, Theorem 2 has an additional term F(x_0) − F(x⋆) in the upper bound, which is inevitable. This difference comes from the different restrictions on the choice of m: for AM2-SGD, m ≥ 1 is the only requirement, so m is allowed to be far larger than K, and in that regime an improved rate is impossible, which makes this additional term unavoidable. As a sanity check, we can let m → ∞ to obtain a point table consisting almost entirely of x_0, and then the upper bound becomes exactly F(x_0) − F(x⋆). In some cases, there exists an optimal choice of m > 1 in Theorem 2. However, the optimal choice could be messy and thus we omit the discussion here.
Understanding: Comparing the rates, we see that when using the same m, AM2-SGD has a slightly better dependence on σ, which is related to the observation in Figure 5 that AM2-SGD is always slightly faster than AM1-SGD. This difference suggests that randomly incorporating past iterates beyond m iterations helps. If m = O(1), Theorems 1 and 2 establish the optimal O(L/K² + (σ + M)/√K) rate in the convex setting (see Lan (2012) for optimality), which verifies AM1-SGD and AM2-SGD as variants of Nesterov's method (Nesterov, 1983; 2013b). From the above analysis, the effect of m can be understood as trading acceleration for variance control. However, since both acceleration and variance control boost the convergence speed, the reduced final performance observed in the CIFAR experiments may not always occur, as will be shown in Figure 5 and Table 3.
Connections with Katyusha Our original inspiration of AM1-SGD comes from the construction of Katyusha (Allen-Zhu, 2018), the recent breakthrough in finite-sum convex optimization, which uses a previously calculated “snapshot” point to provide momentum, i.e., Katyusha momentum. AM1-SGD also uses an aggregated point to provide momentum and it shares many structural similarities with Katyusha. We refer the interested readers to Appendix B.3.
5 PERFORMANCE EVALUATION
In this section, we evaluate AM1-SGD and AM2-SGD on more deep learning tasks. Our goal is to show their potential to serve as alternatives for M-SGD. Regarding the options: for AM1-SGD, Option I is a nice choice, which has slightly better final performance as shown in Table 1; for AM2-SGD, Option I is not recommended as mentioned before. Here we choose to evaluate Option II for both methods for consistency, which also corresponds to the analysis in Section 4. AM1-SGD and AM2-SGD use exactly the same values for (η, β) as M-SGD, which were tuned to optimize the performance of M-SGD. We set m = 5 for both AM1-SGD and AM2-SGD.
We trained ResNet50 and ResNet152 (He et al., 2016b) on the ILSVRC2012 dataset (“ImageNet”) (Russakovsky et al., 2015), with the results shown in Figure 5b. For this task, we used a 0.1 initial learning rate and 0.9 momentum for all methods, which is a typical choice. We performed a restart after each learning rate reduction, as discussed in Appendix A.4. We believe that this helps the training process and does not incur any additional overhead. We report the final accuracy in Table 3.
We also ran a language model experiment on the Penn Treebank dataset (Marcus et al., 1993). We used the LSTM (Hochreiter & Schmidhuber, 1997) model defined in Merity et al. (2017) and followed the experimental setup of its released code. We only changed the learning rate and momentum in
the setup. The baseline is SGD+ASGD¹¹ (Polyak & Juditsky, 1992) with a constant learning rate of 30, as used in Merity et al. (2017). For the choice of (η, β), following Lucas et al. (2019), we chose β = 0.99 and used the scheduler that reduces the learning rate by half when the validation loss has not decreased for 15 epochs. We swept η over {5, 2.5, 1, 0.1, 0.01} and found that η = 2.5 resulted in the lowest validation perplexity for M-SGD. We thus ran AM1-SGD and AM2-SGD with this (η, β) and m = 5. Due to the small decay factor, we did not restart AM1-SGD and AM2-SGD after learning rate reductions. The validation perplexity curve is plotted in Figure 5a. We report validation perplexity and test perplexity in Table 3. This experiment is directly comparable with the one in Lucas et al. (2019).
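For reference, the validation-triggered halving scheduler described above corresponds to PyTorch's built-in ReduceLROnPlateau. The sketch below uses PyTorch's Nesterov SGD as a stand-in for the M-SGD baseline; model, train_one_epoch and evaluate are assumed helper names, not from the paper's code.

```python
import torch

opt = torch.optim.SGD(model.parameters(), lr=2.5, momentum=0.99, nesterov=True)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
    opt, mode='min', factor=0.5, patience=15)  # halve lr after 15 stale epochs

for epoch in range(num_epochs):
    train_one_epoch(model, opt)   # assumed training routine
    val_loss = evaluate(model)    # assumed validation routine
    sched.step(val_loss)          # scheduler watches the validation loss
```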
Extra results are provided in the appendices for interested readers: the robustness when using large β (Appendix A.2), a CIFAR-100 experiment (Appendix A.6) and comparison with classical momentum (Polyak, 1964), AggMo (Lucas et al., 2019) and QHM (Ma & Yarats, 2019) (Appendix A.3).
6 CONCLUSIONS
We presented Amortized Nesterov’s Momentum, which is a special variant of Nesterov’s momentum that utilizes several past iterates to provide the momentum. Based on this idea, we designed two different realizations, namely, AM1-SGD and AM2-SGD. Both of them are simple to implement with little-to-no additional tuning overhead over M-SGD. Our empirical results demonstrate that switching to AM1-SGD and AM2-SGD produces faster early convergence and comparable final generalization performance. AM1-SGD is lightweight and has more robust iterates than M-SGD, and thus can serve as a favorable alternative to M-SGD in large-scale deep learning tasks. AM2-SGD could be favorable for more restrictive tasks (e.g., asynchronous training) due to its extensibility and good performance. Both the methods are proved optimal in the convex case, just like M-SGD. Based on the intuition from convex analysis, the proposed methods are trading acceleration for variance control, which provides hints for the hyper-parameter tuning.
¹¹ SGD+ASGD runs SGD and switches to averaged SGD (ASGD) when a threshold is met.
Appendices

A Extra Experimental Results
  A.1 The effect of m on convergence
  A.2 Robustness on large momentum parameters
  A.3 Comparison with other momentum
  A.4 Issues with learning rate schedulers
  A.5 Test accuracy results of Figure 4 & Table 2
  A.6 CIFAR-100 experiment
  A.7 A sanity check

B Missing parts in Section 4
  B.1 The reformulations
  B.2 Proofs of Theorem 1 and Theorem 2
    B.2.1 Proof of Lemma 1
    B.2.2 Proof of Theorem 1a
    B.2.3 Proof of Theorem 1b
    B.2.4 Proof of Theorem 2
  B.3 Connections between AM1-SGD and Katyusha

C Miscellanies
  C.1 Comparison of SGD and M-SGD
  C.2 Training evaluation

D Experimental Setup
  D.1 Classification Setup
  D.2 Language Model Setup
A EXTRA EXPERIMENTAL RESULTS
In this appendix, we provide more experimental results to further evaluate the Amortized Nesterov’s Momentum. Table 4 shows the detailed data of the parameter sweep experiments, where the convergence curves of these results are given in Appendix A.1. In Appendix A.2, we compare the robustness of AM1-SGD and M-SGD on large momentum parameters. In Appendix A.3, we empirically compare the Amortized Nesterov’s Momentum with classical momentum (Polyak, 1964), aggregated momentum (Lucas et al., 2019) and quasi-hyperbolic momentum (Ma & Yarats, 2019). We discuss the issues with learning rate schedulers in Appendix A.4. We report the test accuracy results of the ResNet18 experiment (in Section 4) in Appendix A.5. A CIFAR-100 experiment is provided in Appendix A.6. We also provide a sanity check for our implementation in Appendix A.7.
[Table 4: Method, Description, Final Accuracy (Avg., STD); detailed data of the parameter sweep experiments.]
A.1 THE EFFECT OF m ON CONVERGENCE
We show in Figure 6 how m affects the convergence of test accuracy. The results show that increasing m speeds up the convergence in the early stage. While for AM1-SGD the convergences of Option I and Option II are similar, AM2-SGD with Option II is consistently better than with Option I in this experiment. It seems that AM2-SGD with Option I does not benefit from increasing m and the algorithm is not robust. Thus, we do not recommend using Option I for AM2-SGD.
A.2 ROBUSTNESS ON LARGE MOMENTUM PARAMETERS
We compare the robustness of M-SGD and AM1-SGD when β is large in Figure 7 & Table 5. For fair comparison, AM1-SGD uses Option I. As we can see, the STD error of M-SGD scales up significantly when β is larger, and its performance is more affected by a large β than that of AM1-SGD.
A.3 COMPARISON WITH OTHER MOMENTUM
In this section, we compare AM1-SGD (Option I) with classical momentum (Polyak, 1964), AggMo (Lucas et al., 2019) and QHM (Ma & Yarats, 2019) in our basic case study (training ResNet34 on
CIFAR-10). Since we are not aware of what makes a fair comparison with these methods (e.g., it is not clear what is the effective learning rate for AM1-SGD), we compare them based on the default hyper-parameter settings suggested by their papers.
Classical Momentum The SGD with classical momentum (CM-SGD) that is widely used in deep learning has the following scheme (standard PyTorch) (v^{cm} ∈ R^d, v^{cm}_0 = 0):

v^{cm}_{k+1} = β · v^{cm}_k + ∇f_{i_k}(x_k),
x_{k+1} = x_k − η · v^{cm}_{k+1}, for k ≥ 0.
CM-SGD with its typical hyper-parameter settings (η0 = 0.1, β = 0.9) is observed to achieve similar generalization performance as M-SGD. However, CM-SGD is more unstable and prone to oscillations (Lucas et al., 2019), which makes it less robust than M-SGD as shown in Table 6.
Aggregated Momentum (AggMo) AggMo combines multiple momentum buffers, which is inspired by the passive damping from the physics literature (Lucas et al., 2019). AggMo uses the following update rules (for t = 1, . . . , T, v^{(t)} ∈ R^d, v^{(t)}_0 = 0):

v^{(t)}_{k+1} = β^{(t)} · v^{(t)}_k − ∇f_{i_k}(x_k), for t = 1, . . . , T,
x_{k+1} = x_k + (η/T) · Σ_{t=1}^T v^{(t)}_{k+1}, for k ≥ 0.
We used the exponential hyper-parameter setting recommended in the original work with the scale-factor a = 0.1 fixed, β^{(t)} = 1 − a^{t−1} for t = 1, . . . , T, choosing T in {2, 3, 4}. We found that T = 2 gave the best performance in this experiment. As shown in Figure 8 & Table 6, with the help of passive damping, AggMo is more stable and robust compared with CM-SGD.
Quasi-hyperbolic Momentum (QHM) Ma & Yarats (2019) introduce the immediate discount factor ν ∈ R for the momentum scheme, which results in the QHM update rules (α ∈ R, v^{qh} ∈ R^d, v^{qh}_0 = 0):

v^{qh}_{k+1} = β · v^{qh}_k + (1 − β) · ∇f_{i_k}(x_k),
x_{k+1} = x_k − α · (ν · v^{qh}_{k+1} + (1 − ν) · ∇f_{i_k}(x_k)), for k ≥ 0.

Here we used the recommended hyper-parameter setting for QHM (α_0 = 1.0, β = 0.999, ν = 0.7).
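The three schemes above can be transcribed directly into one-step update functions; a minimal sketch follows, where g denotes the sampled gradient ∇f_{i_k}(x_k) and the function names are ours.

```python
def cm_step(x, v, g, eta, beta):
    """Classical momentum (CM-SGD): v = beta*v + g, x = x - eta*v."""
    v = beta * v + g
    return x - eta * v, v

def aggmo_step(x, vs, g, eta, betas):
    """AggMo: one buffer per damping coefficient beta^(t)."""
    vs = [b * v - g for b, v in zip(betas, vs)]
    return x + (eta / len(vs)) * sum(vs), vs

def qhm_step(x, v, g, alpha, beta, nu):
    """QHM: interpolates plain SGD (nu = 0) and momentum SGD (nu = 1)."""
    v = beta * v + (1 - beta) * g
    return x - alpha * (nu * v + (1 - nu) * g), v
```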
Figure 8 shows that AM1-SGD, AggMo and QHM achieve faster convergence in the early stage while CM-SGD has the highest final accuracy. In terms of robustness, huge gaps are observed when comparing AM1-SGD with the remaining methods in Table 6. Note that AM1-SGD is more efficient than both QHM and AggMo, and is as efficient as CM-SGD.
We also plot the convergence of train-batch loss for all the methods in Figure 9. Despite showing worse generalization performance, both QHM and AggMo perform better at reducing the train-batch loss in this experiment, which is consistent with the results reported in Ma & Yarats (2019); Lucas et al. (2019).
A.4 ISSUES WITH LEARNING RATE SCHEDULERS
We show in Figure 10 that when β is large for the task and a step learning rate scheduler with decay factor 10 is used, a performance drop is observed after each reduction. Both Option I and Option II have this issue and their curves are basically identical; here we only show Option II. We fix this issue by performing a restart after each learning rate reduction (labeled with ‘+’). We plot the train-batch loss here because the phenomenon is clearer in this metric. If β = 0.9, there is no observable performance drop in this experiment.
For smooth-changing schedulers such as the cosine annealing scheduler (Loshchilov & Hutter, 2016), the amortized momentum works well as shown in Figure 11.
A.5 TEST ACCURACY RESULTS OF FIGURE 4 & TABLE 2
We report the test accuracy results of the experiments in Section 4 in Figure 12 & Table 7. These results are reminiscent of the ResNet34 experiments (Figure 3 & Table 1).
A.6 CIFAR-100 EXPERIMENT
We report the results of training DenseNet121 (Huang et al., 2017) on CIFAR-100 in Figure 13, which shows that both AM1-SGD and AM2-SGD perform well before the final learning rate reduction. However, the final accuracies are lowered by around 0.6% compared with M-SGD. We also notice that SGD reduces the train-batch loss at an incredibly fast rate and the losses it reaches are consistently lower than those of the other methods over the entire 300 epochs. However, this performance is not reflected in the convergence of test accuracy. We believe this phenomenon suggests that the DenseNet model is actually being “overfitted” by SGD (since in the ResNet experiments, M-SGD always achieves a lower train loss than SGD after the final learning rate reduction).
A.7 A SANITY CHECK
When m = 1, both AM1-SGD and AM2-SGD are equivalent to M-SGD; we plot their convergence in Figure 14 as a sanity check (the detailed data is given in Table 4).
We observed that when m = 1, both AM1-SGD and AM2-SGD have a lower STD error than M-SGD. We believe this is because they both maintain the iterates without scaling, which is numerically more stable than M-SGD (M-SGD in standard PyTorch maintains a scaled buffer, i.e., v^{pt}_k = η^{−1}β^{−1} · (y_k − x_k)).
B MISSING PARTS IN SECTION 4
B.1 THE REFORMULATIONS
When h ≡ 0 and β is a constant, we do the reformulations by eliminating the sequence {z_k}. For the reformulated AM2-SGD,

x_k^{j_k} = (1 − β) · z_k + β · φ_{j_k}^k,
z_{k+1} = z_k − α · ∇f_{i_k}(x_k^{j_k}),
φ_{j_k}^{k+1} = (1 − β) · z_{k+1} + β · φ_{j_k}^k,
( x_{k+1}^{j_{k+1}} = (1 − β) · z_{k+1} + β · φ_{j_{k+1}}^{k+1} ).

With α(1 − β) = η, eliminating {z_k} turns the reformulated AM2-SGD into

φ_{j_k}^{k+1} = x_k^{j_k} − η · ∇f_{i_k}(x_k^{j_k}),
x_{k+1}^{j_{k+1}} = φ_{j_k}^{k+1} + β · ( φ_{j_{k+1}}^{k+1} − φ_{j_k}^k ),

which is exactly Algorithm 2.
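As a numerical sanity check of this equivalence (our own script, not from the paper), the following compares the (z, φ) form against the eliminated form on a toy quadratic with a shared index sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, K, eta, beta = 5, 3, 50, 0.05, 0.9
alpha = eta / (1 - beta)
A = np.diag(np.arange(1.0, d + 1))      # toy quadratic f(x) = x'Ax/2
grad = lambda x: A @ x
x0 = rng.standard_normal(d)
js = rng.integers(m, size=K + 1)        # shared indexes j_0, ..., j_K

# (z, phi) form
z, phi = x0.copy(), np.tile(x0, (m, 1))
for k in range(K):
    x = (1 - beta) * z + beta * phi[js[k]]
    z = z - alpha * grad(x)
    phi[js[k]] = (1 - beta) * z + beta * phi[js[k]]

# eliminated form (Algorithm 2)
x2, phi2 = x0.copy(), np.tile(x0, (m, 1))
for k in range(K):
    old = phi2[js[k]].copy()
    phi2[js[k]] = x2 - eta * grad(x2)
    x2 = phi2[js[k]] + beta * (phi2[js[k + 1]] - old)

assert np.allclose(phi, phi2)           # the point tables agree
```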
For the reformulated AM1-SGD, when h ≡ 0, the inner loops are basically SGD,
x_k = (1 − β) · z_k + β · x̃_s,
z_{k+1} = z_k − α · ∇f_{i_k}(x_k),
( x_{k+1} = (1 − β) · z_{k+1} + β · x̃_s, )

and with α(1 − β) = η, eliminating {z_k} gives x_{k+1} = x_k − η · ∇f_{i_k}(x_k).
At the end of each inner loop (i.e., when (k + 1) mod m = 0), we have

x_{(s+1)m} = (1 − β) · z_{(s+1)m} + β · x̃_s,

while at the beginning of the next inner loop,

x_{(s+1)m} = (1 − β) · z_{(s+1)m} + β · x̃_{s+1},

which means that we need to set x_{k+1} ← x_{k+1} + β · (x̃_{s+1} − x̃_s) (reassign the value of x_{k+1}). We also give the reformulation of M-SGD (scheme (1)) to the Auslender & Teboulle (2006) scheme for reference:
Auslender & Teboulle (2006) (AC-SA (Lan, 2012)):

x_k = (1 − β) · z_k + β · y_k,
z_{k+1} = z_k − α · ∇f_{i_k}(x_k),
y_{k+1} = (1 − β) · z_{k+1} + β · y_k,
( x_{k+1} = (1 − β) · z_{k+1} + β · y_{k+1} ).

With α(1 − β) = η, eliminating {z_k} yields the scheme of Nesterov (1983; 2013b):

y_{k+1} = x_k − η · ∇f_{i_k}(x_k),
x_{k+1} = y_{k+1} + β · (y_{k+1} − y_k).
AC-SA (in the Euclidean case) maps to the Auslender & Teboulle (2006) scheme through (in the original notations) x = x^{md}, z = x, y = x^{ag}, 1 − β = β_t^{−1}, and α = γ_t.
Intuition for the Auslender & Teboulle (2006) scheme can be found in Remark 2 in Lan (2012).
B.2 PROOFS OF THEOREM 1 AND THEOREM 2
The reformulated schemes are copied here for reference:
AM1-SGD (reformulated, proximal)
Initialize: x̃_0 = z_0 = x_0, S = K/m.
1: for s = 0, . . . , S − 1 do
2:   for j = 0, . . . , m − 1 do
3:     k = sm + j.
4:     x_k = (1 − β_s) · z_k + β_s · x̃_s.
5:     z_{k+1} = prox_{α_s h}{z_k − α_s · ∇f_{i_k}(x_k)}.
6:     (x_{k+1} = (1 − β_s) · z_{k+1} + β_s · x̃_s.)
7:   end for
8:   x̃_{s+1} = (1/m) Σ_{j=1}^m x_{sm+j}.
9: end for
Output: x̃_S.

AM2-SGD (reformulated, proximal)
Initialize: z_0 = φ_j^0 = x_0, ∀j ∈ [m].
1: for k = 0, . . . , K − 1 do
2:   Sample j_k uniformly in [m].
3:   x_k^{j_k} = (1 − β_k) · z_k + β_k · φ_{j_k}^k.
4:   z_{k+1} = prox_{α_k h}{z_k − α_k · ∇f_{i_k}(x_k^{j_k})}.
5:   φ_{j_k}^{k+1} = (1 − β_k) · z_{k+1} + β_k · φ_{j_k}^k.
6: end for
Output: φ̄_K = (1/m) Σ_{j=1}^m φ_j^K.

Comparing the reformulated schemes, we see that their iterations can be generalized as follows:

x = (1 − β) · z + β · y,
z⁺ = prox_{αh}{z − α · ∇f_i(x)},
y⁺ = (1 − β) · z⁺ + β · y.   (6)
This type of scheme was first proposed in Auslender & Teboulle (2006) and represents one of the simplest variants of Nesterov's methods (see Tseng (2008) for other variants). The scheme has since been adapted to various settings (Hu et al., 2009; Lan, 2012; Ghadimi & Lan, 2012; 2016; Zhou et al., 2019; Lan et al., 2019) to achieve acceleration. The following lemma serves as a cornerstone for the convergence proofs of AM1-SGD and AM2-SGD.
Lemma 1. If α(1 − β) < 1/L, the update scheme (6) satisfies the following recursion:

(1/(1 − β)) (F(y⁺) − F(x⋆)) ≤ (β/(1 − β)) (F(y) − F(x⋆)) + (1/(2α)) (‖z − x⋆‖² − ‖z⁺ − x⋆‖²)
  + (‖∇f(x) − ∇f_i(x)‖ + M)² / (2(α^{−1} − L(1 − β))) + ⟨∇f(x) − ∇f_i(x), z − x⋆⟩.
B.2.1 PROOF OF LEMMA 1
This Lemma is similarly provided in Lan (2012); Ghadimi & Lan (2012) under a more general setting that allows non-Euclidean norms in the assumptions, we give a proof here for completeness.
Based on the convexity (Assumption (a)), we have

f(x) − f(x⋆) ≤ ⟨∇f(x), x − z⟩ + ⟨∇f(x) − ∇f_i(x), z − x⋆⟩ + ⟨∇f_i(x), z − z⁺⟩ + ⟨∇f_i(x), z⁺ − x⋆⟩ =: R_0 + R_1 + R_2 + R_3.   (7)

We upper bound the terms on the right side one-by-one.

For R_0,

R_0 =(⋆) (β/(1 − β)) ⟨∇f(x), y − x⟩ ≤ (β/(1 − β)) (f(y) − f(x)),   (8)

where (⋆) uses the relation between x and z, i.e., (1 − β) · (x − z) = β · (y − x).

For R_2, based on Assumption (a), we have

f(y⁺) − f(x) + ⟨∇f(x), x − y⁺⟩ ≤ (L/2) ‖x − y⁺‖² + M ‖x − y⁺‖.

Then, noting that x − y⁺ = (1 − β) · (z − z⁺), we can arrange the above inequality as

R_2 ≤ (L(1 − β)/2) ‖z − z⁺‖² + (1/(1 − β)) (f(x) − f(y⁺)) + ⟨∇f(x) − ∇f_i(x), z⁺ − z⟩ + M ‖z − z⁺‖
    ≤ (L(1 − β)/2) ‖z − z⁺‖² + (1/(1 − β)) (f(x) − f(y⁺)) + (‖∇f(x) − ∇f_i(x)‖ + M) ‖z − z⁺‖.

Using Young's inequality with ζ > 0, we obtain

R_2 ≤ ((L(1 − β) + ζ)/2) ‖z − z⁺‖² + (1/(1 − β)) (f(x) − f(y⁺)) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2ζ).   (9)

For R_3, based on the optimality condition of prox_{αh}{z − α · ∇f_i(x)} and denoting ∂h(z⁺) as a subgradient of h at z⁺, we have for any u ∈ X,

⟨α · ∂h(z⁺) + z⁺ − z + α · ∇f_i(x), u − z⁺⟩ ≥ 0,
⟨∇f_i(x), z⁺ − u⟩ ≤ ⟨∂h(z⁺), u − z⁺⟩ + (1/α) ⟨z⁺ − z, u − z⁺⟩ ≤ h(u) − h(z⁺) + (1/α) ⟨z⁺ − z, u − z⁺⟩.

Choosing u = x⋆,

R_3 ≤ h(x⋆) − h(z⁺) + (1/α) ⟨z⁺ − z, x⋆ − z⁺⟩ =(⋆) h(x⋆) − h(z⁺) + (1/(2α)) (‖z − x⋆‖² − ‖z⁺ − x⋆‖² − ‖z⁺ − z‖²),   (10)

where (⋆) follows from ‖a + b‖² = ‖a‖² + ‖b‖² + 2⟨a, b⟩. Finally, by upper bounding (7) using (8), (9), (10), we conclude that

f(x) − f(x⋆) ≤ R_1 + (β/(1 − β)) (f(y) − f(x)) + ((L(1 − β) + ζ − α^{−1})/2) ‖z − z⁺‖²
  + (1/(1 − β)) (f(x) − f(y⁺)) + h(x⋆) − h(z⁺) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2ζ)
  + (1/(2α)) (‖z − x⋆‖² − ‖z⁺ − x⋆‖²).

After simplification,

(1/(1 − β)) (f(y⁺) − f(x⋆)) ≤ (β/(1 − β)) (f(y) − f(x⋆)) + ((L(1 − β) + ζ − α^{−1})/2) ‖z − z⁺‖²
  + h(x⋆) − h(z⁺) + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2ζ) + R_1 + (1/(2α)) (‖z − x⋆‖² − ‖z⁺ − x⋆‖²).   (11)

Note that with the convexity of h and y⁺ = (1 − β) · z⁺ + β · y, we have

h(y⁺) ≤ (1 − β) h(z⁺) + β h(y), i.e., h(z⁺) ≥ (1/(1 − β)) h(y⁺) − (β/(1 − β)) h(y).

Using the above inequality and choosing ζ = α^{−1} − L(1 − β) > 0, i.e., α(1 − β) < 1/L, we can arrange (11) as

(1/(1 − β)) (F(y⁺) − F(x⋆)) ≤ (β/(1 − β)) (F(y) − F(x⋆)) + (1/(2α)) (‖z − x⋆‖² − ‖z⁺ − x⋆‖²)
  + (‖∇f(x) − ∇f_i(x)‖ + M)²/(2(α^{−1} − L(1 − β))) + R_1.
B.2.2 PROOF OF THEOREM 1A
Using Assumption (c) and Lemma 1 with

x = x_k, z = z_k, z⁺ = z_{k+1}, y = x̃_s, y⁺ = x_{k+1}, α = α_s, β = β_s,   (12)

and taking expectation, if α_s(1 − β_s) < 1/L, we have

(1/(1 − β_s)) (E_{i_k}[F(x_{k+1})] − F(x⋆)) + (1/(2α_s)) E_{i_k}[‖z_{k+1} − x⋆‖²]
  ≤ (β_s/(1 − β_s)) (F(x̃_s) − F(x⋆)) + (1/(2α_s)) ‖z_k − x⋆‖² + (σ + M)²/(2(α_s^{−1} − L(1 − β_s))).

Summing the above inequality from k = sm, . . . , sm + m − 1, we obtain

(1/((1 − β_s)m)) Σ_{j=1}^m (E[F(x_{sm+j})] − F(x⋆)) + (1/(2α_s m)) E[‖z_{(s+1)m} − x⋆‖²]
  ≤ (β_s/(1 − β_s)) (F(x̃_s) − F(x⋆)) + (1/(2α_s m)) ‖z_{sm} − x⋆‖² + (σ + M)²/(2(α_s^{−1} − L(1 − β_s))).

Using the definition of x̃_{s+1} and convexity,

(α_s/(1 − β_s)) (E[F(x̃_{s+1})] − F(x⋆)) + (1/(2m)) E[‖z_{(s+1)m} − x⋆‖²]
  ≤ (α_s β_s/(1 − β_s)) (F(x̃_s) − F(x⋆)) + (1/(2m)) ‖z_{sm} − x⋆‖² + α_s(σ² + M²)/(α_s^{−1} − L(1 − β_s)).   (13)

It can be verified that with the choices β_s = s/(s + 2) and α_s = λ_1/(L(1 − β_s)), the following holds for s ≥ 0:

α_{s+1} β_{s+1}/(1 − β_{s+1}) ≤ α_s/(1 − β_s) and β_0 = 0.   (14)

Note that since our analysis aims at providing intuition, we do not refine the choice of α_s as in (Hu et al., 2009; Ghadimi & Lan, 2012). Thus, by telescoping (13) from s = S − 1, . . . , 0, we obtain

(α_{S−1}/(1 − β_{S−1})) (E[F(x̃_S)] − F(x⋆)) + (1/(2m)) E[‖z_{Sm} − x⋆‖²] ≤ (1/(2m)) ‖x_0 − x⋆‖² + Σ_{s=0}^{S−1} α_s(σ² + M²)/(α_s^{−1} − L(1 − β_s)),

and thus,

E[F(x̃_S)] − F(x⋆) ≤ 2L‖x_0 − x⋆‖²/(λ_1 m(S + 1)²) + (4L(σ² + M²)/(λ_1(S + 1)²)) Σ_{s=0}^{S−1} α_s²/(1 − α_s(1 − β_s)L)
  ≤(a) 2L‖x_0 − x⋆‖²/(λ_1 m(S + 1)²) + (3λ_1(σ² + M²)/(L(S + 1)²)) Σ_{s=0}^{S−1} (s + 2)²
  ≤(b) 2L‖x_0 − x⋆‖²/(λ_1 m(S + 1)²) + 8λ_1(σ² + M²)(S + 1)/L,

where (a) follows from λ_1 ≤ 2/3 and (b) holds because x ↦ (x + 2)² is non-decreasing for x ≥ 0, and thus

Σ_{s=0}^{S−1} (s + 2)² ≤ ∫_0^S (x + 2)² dx ≤ (S + 2)³/3 ≤ 8(S + 1)³/3.

Denoting

λ_1⋆ := L‖x_0 − x⋆‖ / (2√m · √(σ² + M²) · (S + 1)^{3/2}),

and based on the choice λ_1 = min{2/3, λ_1⋆}: if λ_1⋆ ≤ 2/3, we have

E[F(x̃_S)] − F(x⋆) ≤ 8‖x_0 − x⋆‖√(σ² + M²) / (m^{1/2}(S + 1)^{1/2});

if λ_1⋆ > 2/3,

E[F(x̃_S)] − F(x⋆) ≤ 3L‖x_0 − x⋆‖²/(m(S + 1)²) + 4‖x_0 − x⋆‖√(σ² + M²)/(m^{1/2}(S + 1)^{1/2}).

Thus, we conclude that

E[F(x̃_S)] − F(x⋆) ≤ 3L‖x_0 − x⋆‖²/(m(S + 1)²) + 8‖x_0 − x⋆‖√(σ² + M²)/(m^{1/2}(S + 1)^{1/2}).

Substituting S = K/m completes the proof.
B.2.3 PROOF OF THEOREM 1B
In order to prove Theorem 1b, we need the following known result for the martingale difference (cf. Lemma 2 in Lan et al. (2012)):
Lemma 2. With N > 0, let ξ_0, ξ_1, . . . , ξ_{N−1} be a sequence of i.i.d. random variables, and for t = 0, . . . , N − 1, let σ_t > 0 be a deterministic number and ψ_t = ψ_t(ξ_0, . . . , ξ_t) be a deterministic measurable function such that E_{ξ_t}[ψ_t] = 0 a.s. and E_{ξ_t}[exp{ψ_t²/σ_t²}] ≤ exp{1} a.s.. Then for any Λ ≥ 0,

Prob{ Σ_{t=0}^{N−1} ψ_t ≥ Λ √(Σ_{t=0}^{N−1} σ_t²) } ≤ exp{−Λ²/3}.

To start with, using Lemma 1 with the parameter mapping (12), we have

(1/(1 − β_s)) (F(x_{k+1}) − F(x⋆)) + (1/(2α_s)) ‖z_{k+1} − x⋆‖²
  ≤ (β_s/(1 − β_s)) (F(x̃_s) − F(x⋆)) + (1/(2α_s)) ‖z_k − x⋆‖²
    + (‖∇f(x_k) − ∇f_{i_k}(x_k)‖ + M)²/(2(α_s^{−1} − L(1 − β_s))) + ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩
  ≤ (β_s/(1 − β_s)) (F(x̃_s) − F(x⋆)) + (1/(2α_s)) ‖z_k − x⋆‖² + M²/(α_s^{−1} − L(1 − β_s))
    + ‖∇f(x_k) − ∇f_{i_k}(x_k)‖²/(α_s^{−1} − L(1 − β_s)) + ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩.

Summing the above inequality from k = sm, . . . , sm + m − 1 and using the choice α_s = λ_1/(L(1 − β_s)) with λ_1 ≤ 2/3, we obtain

(α_s/(1 − β_s)) (F(x̃_{s+1}) − F(x⋆)) + (1/(2m)) ‖z_{(s+1)m} − x⋆‖²
  ≤ (α_s β_s/(1 − β_s)) (F(x̃_s) − F(x⋆)) + (1/(2m)) ‖z_{sm} − x⋆‖² + 3α_s² M²
    + (3α_s²/m) Σ_{k=sm}^{sm+m−1} ‖∇f(x_k) − ∇f_{i_k}(x_k)‖² + (α_s/m) Σ_{k=sm}^{sm+m−1} ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩.

With our parameter choices, the relations in (14) hold and thus we can telescope the above inequality from s = S − 1, . . . , 0:

(α_{S−1}/(1 − β_{S−1})) (F(x̃_S) − F(x⋆)) ≤ (1/(2m)) ‖x_0 − x⋆‖² + 3M² Σ_{s=0}^{S−1} α_s² + (3/m) R_4 + (1/m) R_5,   (15)

where

R_4 := Σ_{k=0}^{K−1} α_{⌊k/m⌋}² ‖∇f(x_k) − ∇f_{i_k}(x_k)‖², R_5 := Σ_{k=0}^{K−1} α_{⌊k/m⌋} ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩.

Denoting V_k² := ‖∇f(x_k) − ∇f_{i_k}(x_k)‖² and ᾱ := Σ_{k=0}^{K−1} α_{⌊k/m⌋}² = m Σ_{s=0}^{S−1} α_s², for R_4, by Jensen's inequality, we have

E[ exp{ (1/ᾱ) Σ_{k=0}^{K−1} α_{⌊k/m⌋}² V_k²/σ² } ] ≤ (1/ᾱ) Σ_{k=0}^{K−1} α_{⌊k/m⌋}² E[ exp{V_k²/σ²} ] ≤(⋆) exp{1},

where (⋆) uses the additional assumption E_{i_k}[exp{V_k²/σ²}] ≤ exp{1}.

Then, based on Markov's inequality, we have for any Λ ≥ 0,

Prob{ exp{ (1/ᾱ) Σ_{k=0}^{K−1} α_{⌊k/m⌋}² V_k²/σ² } ≥ exp{Λ + 1} } ≤ exp{−Λ},
i.e., Prob{ R_4 ≥ (Λ + 1) σ² m Σ_{s=0}^{S−1} α_s² } ≤ exp{−Λ}.   (16)

For R_5, since we have E_{i_k}[ α_{⌊k/m⌋} ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩ ] = 0 and

E_{i_k}[ exp{ α_{⌊k/m⌋}² ⟨∇f(x_k) − ∇f_{i_k}(x_k), z_k − x⋆⟩² / (α_{⌊k/m⌋}² σ² D_X²) } ] ≤ E_{i_k}[ exp{V_k²/σ²} ] ≤ exp{1},

which is based on the “light tail” assumption, using Lemma 2, we obtain

Prob{ R_5 ≥ Λ σ D_X √( m Σ_{s=0}^{S−1} α_s² ) } ≤ exp{−Λ²/3}.   (17)

Combining (15), (16) and (17), based on the parameter setting (cf. (5)) and using the notation

K_0(m) := 3Lm‖x_0 − x⋆‖²/(K + m)² + 8‖x_0 − x⋆‖√(σ² + M²)/√(K + m),
R_6 := (12Lσ²/(λ_1(S + 1)²)) Σ_{s=0}^{S−1} α_s² + (4LσD_X/(λ_1(S + 1)² √m)) √( Σ_{s=0}^{S−1} α_s² ),

we conclude that

Prob{ F(x̃_S) − F(x⋆) ≤ K_0(m) + Λ R_6 } ≥ 1 − (exp{−Λ²/3} + exp{−Λ}).

For R_6, using the choice of α_s and λ_1, we obtain

R_6 ≤ 4√6 σ D_X/(3√(K + m)) + 8λ_1 σ²(S + 1)/L ≤ 4√6 σ D_X/(3√(K + m)) + 4σ²‖x_0 − x⋆‖/(√(K + m) √(σ² + M²))
    ≤ 4σ(3‖x_0 − x⋆‖ + √6 D_X)/(3√(K + m)),

which completes the proof.
B.2.4 PROOF OF THEOREM 2
Using Assumption (c) and Lemma 1 with

x = x_k^{j_k}, z = z_k, z⁺ = z_{k+1}, y = φ_{j_k}^k, y⁺ = φ_{j_k}^{k+1}, α = α_k, β = β_k,

and taking expectation, if α_k(1 − β_k) < 1/L, we have

(1/(1 − β_k)) E_{i_k,j_k}[ F(φ_{j_k}^{k+1}) − F(x⋆) ] + (1/(2α_k)) E_{i_k,j_k}[ ‖z_{k+1} − x⋆‖² ]
  ≤ (β_k/(1 − β_k)) E_{j_k}[ F(φ_{j_k}^k) − F(x⋆) ] + (1/(2α_k)) ‖z_k − x⋆‖² + (σ + M)²/(2(α_k^{−1} − L(1 − β_k))).   (18)

Note that

E_{i_k,j_k}[ F(φ_{j_k}^{k+1}) − F(x⋆) ] = E_{i_k,j_k}[ Σ_{j=1}^m (F(φ_j^{k+1}) − F(x⋆)) ] − E_{j_k}[ Σ_{j≠j_k} (F(φ_j^k) − F(x⋆)) ].

Dividing both sides of (18) by m and then adding (1/((1 − β_k)m)) E_{j_k}[ Σ_{j≠j_k} (F(φ_j^k) − F(x⋆)) ] to both sides, we obtain

(1/(1 − β_k)) E_{i_k,j_k}[ (1/m) Σ_{j=1}^m F(φ_j^{k+1}) − F(x⋆) ] + (1/(2α_k m)) E_{i_k,j_k}[ ‖z_{k+1} − x⋆‖² ]
  ≤ −(1/m) E_{j_k}[ F(φ_{j_k}^k) − F(x⋆) ] + (1/(1 − β_k)) ( (1/m) Σ_{j=1}^m F(φ_j^k) − F(x⋆) ) + (1/(2α_k m)) ‖z_k − x⋆‖²
    + (σ + M)²/(2m(α_k^{−1} − L(1 − β_k)))
  = ((1 − (1 − β_k)/m)/(1 − β_k)) ( (1/m) Σ_{j=1}^m F(φ_j^k) − F(x⋆) ) + (1/(2α_k m)) ‖z_k − x⋆‖² + (σ + M)²/(2m(α_k^{−1} − L(1 − β_k))).   (19)

It can be verified that with our parameter choice β_k = (k/m)/(k/m + 2) and α_k = λ_2/(L(1 − β_k)), the following holds for k ≥ 0:

α_{k+1} (1 − (1 − β_{k+1})/m)/(1 − β_{k+1}) ≤ α_k/(1 − β_k) and β_0 = 0.

Note that since our analysis aims at providing intuition, we do not refine the choice of α_k as in (Hu et al., 2009; Ghadimi & Lan, 2012). Then, we can telescope (19) from k = K − 1, . . . , 0, which results in

(α_{K−1}/(1 − β_{K−1})) E[ (1/m) Σ_{j=1}^m F(φ_j^K) − F(x⋆) ] + (1/(2m)) E[ ‖z_K − x⋆‖² ]
  ≤ (λ_2(m − 1)/(Lm)) (F(x_0) − F(x⋆)) + (1/(2m)) ‖x_0 − x⋆‖² + Σ_{k=0}^{K−1} α_k(σ + M)²/(2m(α_k^{−1} − L(1 − β_k))).

Using the definition of φ̄_K and convexity, we obtain

E[ F(φ̄_K) − F(x⋆) ] ≤ ((1 − β_{K−1})/α_{K−1}) ( (λ_2(m − 1)/(Lm)) (F(x_0) − F(x⋆)) + (1/(2m)) ‖x_0 − x⋆‖² )
    + ((1 − β_{K−1})/α_{K−1}) Σ_{k=0}^{K−1} α_k(σ + M)²/(2m(α_k^{−1} − L(1 − β_k)))
  ≤(a) 4(m − 1)(F(x_0) − F(x⋆))/(m((K−1)/m + 2)²) + 2L‖x_0 − x⋆‖²/(λ_2 m((K−1)/m + 2)²)
    + (3λ_2(σ + M)²/(2Lm((K−1)/m + 2)²)) Σ_{k=0}^{K−1} (k/m + 2)²
  ≤(b) 4(m − 1)(F(x_0) − F(x⋆))/(m((K−1)/m + 2)²) + 2L‖x_0 − x⋆‖²/(λ_2 m((K−1)/m + 2)²)
    + 4λ_2(σ + M)²((K−1)/m + 2)/L,   (20)

where (a) uses λ_2 ≤ 2/3, and (b) follows from simple integration arguments and the fact that K/m + 2 ≤ 2((K−1)/m + 2) since K ≥ 1, m ≥ 1.

Based on the choice of

λ_2 = min{ 2/3, L‖x_0 − x⋆‖/(√(2m)(σ + M)((K−1)/m + 2)^{3/2}) },

(20) can be further upper bounded as

E[ F(φ̄_K) − F(x⋆) ] ≤ 4(m − 1)(F(x_0) − F(x⋆))/(m((K−1)/m + 2)²) + 3L‖x_0 − x⋆‖²/(m((K−1)/m + 2)²)
  + 4√2 ‖x_0 − x⋆‖(σ + M)/(m^{1/2}((K−1)/m + 2)^{1/2}).
B.3 CONNECTIONS BETWEEN AM1-SGD AND KATYUSHA
The discussion in this section aims to shed light on the understanding of the experimental results, which also shows some interesting relations between AM1-SGD and Katyusha.
The high-level idea of Katyusha momentum is that it works as a “magnet” inside an epoch of SVRG updates, which “stabilizes” the iterates so as to make Nesterov's momentum effective (Allen-Zhu, 2018). In theory, the key effect of Katyusha momentum is that it allows the tightest possible variance bound for the stochastic gradient estimator of SVRG (cf. Lemma 2.4 and its comments in Allen-Zhu (2018)). In this sense, we can interpret Katyusha momentum as a variance reducer that further reduces the variance of SVRG. Below we show the similarity between the construction of Katyusha and AM1-SGD, based on which we conjecture that the amortized momentum can also reduce the variance of SGD (and thus increase the robustness). However, in theory, following a similar analysis of Katyusha, we cannot guarantee a reduction of σ in the worst case.
Deriving AM1-SGD from Katyusha Katyusha has the following scheme (non-proximal, in the original notations, σ is the strong convexity parameter, cf. Algorithm 1 with Option I in Allen-Zhu (2018)):

Initialize: x̃_0 = y_0 = z_0 = x_0, η = 1/(3L), ω = 1 + ασ.
1: for s = 0, . . . , S − 1 do
2:   Compute and store ∇f(x̃_s).
3:   for j = 0, . . . , m − 1 do
4:     k = sm + j.
5:     x_k = τ_1 · z_k + τ_2 · x̃_s + (1 − τ_1 − τ_2) · y_k.
6:     ∇̃_k = ∇f_{i_k}(x_k) − ∇f_{i_k}(x̃_s)

1. What are the main contributions and novel aspects introduced by the paper regarding Nesterov momentum?
2. What are the strengths of the paper, particularly in the theoretical analysis, connections to mirror descent and Katyusha, and the proof of optimal convergence in the convex setting?
3. Do you have any concerns or questions about the experimental results, such as the unconventional learning rate schedule, unclear discussion on train-batch loss vs. full-batch loss, reporting peak test accuracy, significance of results on ImageNet and PTB, usefulness of "Test Accuracy STD%", and robustness definition?
4. Are there any minor comments or suggestions for improving the paper, such as removing grid lines and making colors more differentiable in Figure 1b, using consistent notation in Algorithm 1, adjusting spacing, and providing more informative acronyms?

Review
This paper proposes two variants of Nesterov momentum that maintain a buffer of recent updates. The paper proves optimal convergence in the convex setting and makes nice connections to mirror descent and Katyusha.
I vote to reject the submission, with my main concerns being with the experimental results. I would consider raising my score if my concerns are addressed.
Concerns
-The learning rate schedule on the CIFAR experiments is unconventional. The original ResNet paper trains for 64k iterations (roughly 160 epochs). It’s standard to train for at least 200 epochs (see schedule from Smith et al.). Do the results hold under the standard schedule with careful tuning for the baselines?
-The discussion on “Train-batch loss vs. Full-batch loss” in Section 2 is unclear. On smaller datasets, it is feasible to perform evaluation at the end of the epoch on the entire batch. Furthermore, reporting test accuracy couples optimization and generalization, and I am not sure what is meant by the statement “test accuracy is too informative.”
-Reporting the peak test accuracy is strange. In general, we do not have access to the test set. It’s more natural to report the final test accuracy or have a holdout set to determine an iteration for evaluation.
-It’s not clear how significant the results on ImageNet and PTB are. Namely, a comparison to AggMo and/or QHM would be good, since the flavor of these algorithms is quite similar. Experiments in the AggMo paper suggest that AggMo performs better on PTB. In the comparison given in Appendix A3, it seems like AggMo performs slightly better throughout training. SGD should also attain better performance with learning rate tuning on ImageNet.
-I’m not sure how useful “Test Accuracy STD%” is as a metric, since it is influenced heavily by the learning rate and its schedule. Tail averaging schemes in general seem like they would increase “robustness.” In addition, there seem to be situations where a higher final variance is beneficial (just run the method for longer and you can find a better solution). It would be nice to expand the discussion on the notion of robustness defined in the paper.
Minor Comments
-The dashed line in Figure 1b is hard to read. I would recommend removing the grid lines and making the colors more differentiable.
-Algorithm 1 uses both the assignment and the equality operator, whereas the other boxes use equality only.
-Spacing looks a bit off in parts of the paper: 1) after the first sentence in the introduction; 2) “generic optimization layer (Defazio, 2018) . However”.
-M-SGD and M-SGD2 can be potentially confusing and are not too informative as acronyms.
-Remark on Theorem 1b: “depicts” does not seem like the right word.
Smith, S. L., Kindermans, P. J., Ying, C., & Le, Q. V. (2017). Don't decay the learning rate, increase the batch size. arXiv preprint arXiv:1711.00489. |
ICLR | Title
Understanding the robustness-accuracy tradeoff by rethinking robust fairness
Abstract
Although current adversarial training (AT) methods can effectively improve robustness on adversarial examples, they usually lead to a decrease in accuracy, known as the robustness-accuracy tradeoff. In addition, researchers have recently discovered a robust fairness phenomenon in AT models; that is, the classes of a dataset do not experience the decline in accuracy equally when AT is introduced. In this paper, we explore the relationship between the robustness-accuracy tradeoff and robust fairness for the first time. Empirically, we find that AT causes a substantial increase in inter-class similarity, which could be the root cause of both phenomena. We argue that label smoothing (LS) is more than a trick in AT: the smoothness learned from LS can help reduce the excessive inter-class similarity caused by AT, and also reduce the intra-class variance, thereby significantly improving accuracy. We then explore the effect of another classic smoothing regularizer, maximum entropy (ME), and find that ME can also help reduce both inter-class similarity and intra-class variance. Additionally, we reveal that TRADES actually implies the function of ME, which can explain why TRADES usually performs better than PGD-AT on robustness. Finally, we propose maximum entropy PGD-AT (ME-AT) and maximum entropy TRADES (ME-TRADES), and experimental results show that our methods can significantly mitigate both the tradeoff and robust fairness.
1 INTRODUCTION
1.1 BACKGROUND
Deep neural networks (DNNs) have been proven to be vulnerable to adversarial attacks, as demonstrated in (Szegedy; Goodfellow et al.; Kurakin et al.; Carlini & Wagner). By adding crafted imperceptible perturbations to the input, attackers can easily fool the model into giving an incorrect prediction. To defend against adversarial attacks, tens of methods have been proposed, but most of them later proved to be ineffective (Athalye et al., 2018). Among these many defense techniques, adversarial training (AT) (Madry et al., 2017) has been proven to be the most effective strategy against adversarial attacks.
Although current AT algorithms can effectively improve model robustness, there are two confusing phenomena in AT models. First, there can be an inevitable robustness-accuracy tradeoff (Tsipras et al., 2018) in AT models, in which increasing robustness is always accompanied by an accuracy drop. Second, Xu et al. (2021) recently found that AT tends to introduce severe disparities in accuracy and robustness between different classes. For example, as shown in Figure 1b, in a PGD-AT model (Madry et al., 2017), both the accuracy and robustness of the 3rd class cat are much lower than those of the 1st class car, while the two classes have similar accuracies in the standard training model (see Figure 1a). The authors term this phenomenon robust fairness.
Additionally, as Xu et al. (2021) mentioned, the robust fairness problem is closely related to the robustness-accuracy tradeoff, because the average accuracy drop in the tradeoff could mainly come from the classes that are hard to classify in AT. To verify this, we have measured the accuracy drop for each class and calculated each class's share of the total accuracy drop, as shown in Figure 1c. We can see that just two classes (the cat and bird) contribute almost half of the accuracy drop, while these two classes also have the lowest accuracy and robustness of all classes in AT. That is, these hard-to-classify classes have a significantly greater impact on the decline in accuracy, and to better understand the robustness-accuracy tradeoff, it should be determined why these classes are so difficult to classify in AT.
To explain the phenomenon, Xu et al. (2021) argued that some classes are difficult to classify in AT because they are intrinsically “harder” to classify than other classes, and AT tends to hurt both accuracy and robustness of these “hard” classes. To verify this point of view, these authors studied the effect of AT on a binary classification task under a mixture Gaussian distribution, and the “hard” class is the one with larger variance in their case. They showed that AT will push the decision boundary closer to the larger variance class and further worsen both the accuracy and robustness of the class.
However, although they showed that the class with a larger variance is more difficult to classify, a question still remains: is variance enough to describe how “hard” a class is? Imagine two Gaussian distributions that both have high variance but also an extremely large difference in mean values; they can still be well classified. On the contrary, when the two Gaussian distributions both have low variance but extremely similar mean values, so that the two distributions severely overlap, we cannot classify them satisfactorily. That is, the inter-class similarity is also an important factor affecting the model's accuracy. With this point in mind, we have measured both inter-class similarity and intra-class variance in standard training, PGD-AT and TRADES (Zhang et al., 2019) models for each class in the CIFAR10 test set, as shown in Figure 2.
The measurement is performed in the penultimate layer feature space. For each class, we use the variance of features as the class’s intra-class variance. To measure the inter-class similarity of each
class, we first calculate the feature mean vector for every class, and then the cosine similarity between the mean vector of the measured class and those of the other classes. The largest cosine similarity is used as the inter-class similarity of the measured class in this paper. It is somewhat surprising to see in Figure 2b that both the PGD-AT and TRADES models have a lower variance than the standard training model, while they have worse accuracy instead. However, as shown in Figure 2a, both PGD-AT and TRADES lead to a higher inter-class similarity than standard training. In particular, we notice that the “hardest” class cat does not have the largest variance in any of the PGD-AT, TRADES or standard training models, but it does have the largest inter-class similarity. These observations challenge Xu et al. (2021)'s theory that the “hard” classes are the large-variance classes, indicate that inter-class similarity does matter in AT, and thus motivate us to study both the robust fairness phenomenon and the robustness-accuracy tradeoff through the lens of increased inter-class similarity.
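A minimal PyTorch sketch of this measurement follows; the exact normalization and aggregation details are our assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def class_stats(feats, labels, num_classes):
    """Per-class inter-class similarity and intra-class variance.

    feats: (N, d) penultimate-layer features; labels: (N,) class ids.
    """
    means = torch.stack([feats[labels == c].mean(0) for c in range(num_classes)])
    # pairwise cosine similarity between class mean vectors
    sims = F.cosine_similarity(means.unsqueeze(1), means.unsqueeze(0), dim=-1)
    sims.fill_diagonal_(-1.0)                    # ignore self-similarity
    inter = sims.max(dim=1).values               # largest similarity to another class
    # per-dimension feature variance, averaged over dimensions (our choice)
    intra = torch.stack([feats[labels == c].var(0).mean()
                         for c in range(num_classes)])
    return inter, intra
```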
1.2 OUR CONTRIBUTIONS
Understand the robustness-accuracy tradeoff & robust fairness. To the best of our knowledge, we are the first to study the relationship between the robustness-accuracy tradeoff and the robust fairness, and we find that the two phenomena could both come from the increased inter-class similarity caused by AT. More specifically, through our single AT and binary classification AT experiments in section 2, we find that:
• AT will cause a general increase in inter-class similarity for each class, which even causes a feature overlap, and finally leads to the accuracy drop in the tradeoff.
• The “hard” classes in AT are actually similar classes in standard training, and the increased inter-class similarity in AT makes them more similar and harder to be classified, which causes the robust fairness problem.
Re-investigate the effect of smoothing regularizers in AT. Label smoothing (LS) (Szegedy et al., 2016) has been used as a trick to benchmark robustness in AT by Pang et al. (2020); however, we noticed that LS not only helps improve robustness but usually improves accuracy too, which means a reduction in the robustness-accuracy tradeoff. In this paper, we find LS can help alleviate the tradeoff because it helps reduce the large inter-class similarity in AT, and also provides a lower intra-class variance. Then, we investigate the effect of maximum entropy (ME) (Pereyra et al., 2017), which is also a classic smoothing regularizer, and we find ME can help reduce both inter-class similarity and intra-class variance too. In addition, we find that the state-of-the-art AT method TRADES can be seen as a special maximum entropy learning, which could explain why the TRADES model has a lower intra-class variance than the PGD-AT model in Figure 2, and why it usually performs better than PGD-AT in terms of robustness. We propose maximum entropy PGD-AT (ME-AT) and maximum entropy TRADES (ME-TRADES), and experimental results show that our methods can significantly mitigate both the tradeoff and robust fairness.
2 RETHINKING ROBUST FAIRNESS TOWARD INTER-CLASS SIMILARITY
In Figure 2a, we have shown that AT models have higher inter-class similarity than the standard training model. In this section, we design two experiments to see how the high inter-class similarity in AT is related to both the robustness-accuracy tradeoff and robust fairness phenomena.
2.1 SINGLE ADVERSARIAL TRAINING
AT causes a feature overlap. We design single adversarial training (single AT) to see how AT affects one single class. In single AT, we only conduct adversarial training on one class while training the other classes normally. For better visualization, we adjust the penultimate layer of a ResNet-18 model to output 2-D features. In Figure 3, we show the single AT results of the two most representative classes: the “hardest” class cat and the “easiest” class car. The results of other classes and detailed settings are both provided in Appendix A.1. In Figure 3b, when single AT is applied to the 3rd class cat, the features of the cat severely overlap with those of the 5th class dog, and the overlapping features make the class cat almost impossible to classify (only 7.03 natural accuracy and 0 PGD-10 robustness). This observation intuitively shows how the inter-class similarity increases in
AT, and proves that the accuracy drop part in the robustness-accuracy tradeoff could come from the increased inter-class similarity (the overlapping features).
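A minimal sketch of one single AT training step is given below; all names are ours, and pgd_attack stands for an assumed PGD-10 attack routine.

```python
import torch.nn.functional as F

def single_at_loss(model, x, y, target_class, pgd_attack):
    """Single AT: only examples of `target_class` are adversarially
    perturbed; all other examples are trained on clean inputs."""
    x_in = x.clone()
    mask = (y == target_class)
    if mask.any():
        x_in[mask] = pgd_attack(model, x[mask], y[mask])
    return F.cross_entropy(model(x_in), y)
```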
The increase in inter-class similarity is general in AT. However, when single AT is carried out for the 1st class car, the features of the class car can still be split well from other classes, and both the accuracy and PGD-10 robustness of the class car achieve a high level (98.4 and 72.2 respectively, see Figure 3a). Does this mean that the “easy” classes can avoid an increase in inter-class similarity in AT?
To check this, we measure the inter-class similarity in the single AT models, and for comparison, we also measure the inter-class similarity of a standard training 2-D features ResNet-18 model. As shown in Figure 4, each point in the blue line represents the inter-class similarity of the corresponding class in its own single AT model (e.g., the point of the class car in the blue line represents the inter-class similarity of the class car in the single car AT model), and the yellow line is the inter-class similarity of the standard training model. We can see that even the “easiest” class car has a higher inter-class similarity in the single AT model than in the standard training model. This observation shows that the increase in inter-class similarity in AT is general for all classes.
2.2 BINARY CLASSIFICATION ADVERSARIAL TRAINING
“Hard” classes or similar classes? Since the increase in inter-class similarity is general for all classes, we hypothesize that some classes are difficult to classify in AT because they are already similar in standard training, and the increased inter-class similarity caused by AT makes them even more similar and turns them into the “hard” classes. To verify this hypothesis, we conduct the binary classification AT experiments. We set the class cat to be binary classified against each of the other classes in the CIFAR10 dataset, and we use both PGD-AT (Madry et al., 2017) and TRADES (Zhang et al., 2019) to train our binary classification ResNet-18 models (512-D features here).
We plot the natural error and PGD-10 error of the PGD-AT and TRADES trained binary classification models in Figure 5a and Figure 5b, respectively. Classes on the horizontal axis represent the classes binary classified against the class cat, sorted from small to large by their similarity with the cat in standard training. We find that both the natural error and the PGD-10 error of the binary classification PGD-AT and TRADES models are highly positively correlated with the similarity in standard training. For example, the class car is the least similar class to the cat in standard training; when the cat is binary classified against the car, the model attains both low natural error and low PGD-10 error (4.6 and 11.0). However, when the cat is binary classified against the most similar class dog, the ResNet-18 model even fails to converge in PGD-AT (49.7 for both natural and PGD-10 error), and even though the model converges in TRADES, it also has the highest natural error and PGD-10 error (23.4 and 44.0). This observation indicates that the “hard” classes in AT could actually be the similar classes in standard training.
2.3 UNDERSTANDING THE TRADEOFF & ROBUST FAIRNESS
To briefly summarize, through our single AT and binary classification AT experiments, we find the following:
• AT will even cause a feature overlap to the “hard” classes, which leads to a severe accuracy drop.
• The increase in inter-class similarity is general in AT for all classes.
• The “hard” classes in AT may actually be similar classes in standard training, made harder by the generally increased inter-class similarity.
These findings indicate that the increased inter-class similarity could be the root cause of both the robustness-accuracy tradeoff and the robust fairness problem, and they suggest a new way to mitigate the tradeoff: to obtain better robustness and accuracy, the excessive inter-class similarity in AT should be reduced (while not increasing the intra-class variance). In the next section, we show this direction is promising through the effect of smoothing regularizers.
Our explanation. Finally, we provide an intuitive explanation for why AT leads to higher inter-class similarity. The core objective of AT is to force adversarial examples to follow the same distribution as the well-classified clean examples. To achieve this, TRADES directly minimizes the KL-divergence between adversarial and clean examples while minimizing the cross-entropy loss of clean examples to achieve high accuracy. In PGD-AT, this objective is implicit: PGD-AT directly minimizes the cross-entropy loss of adversarial examples, and because adversarial examples can be seen as a robust lower bound of clean examples, it actually forces both adversarial and clean examples to fit the same one-hot label. However, forcing adversarial examples to fit the distribution of clean examples may also pull the clean examples closer to the distribution of the adversarial examples. Since adversarial examples are naturally closer to other classes, the adoption of adversarial examples can pull classes closer together, resulting in higher inter-class similarity.
3 THE ROLE OF THE SMOOTHING REGULARIZER
Based on our findings in Section 2, in this section, we argue that LS is not only a trick in AT. We find that the smoothness learned from LS can significantly reduce both inter-class similarity and intra-class variance in AT, which is a remedy for current AT methods. Then, we investigate the effect of ME, which is also a classic smoothing regularizer, and we find TRADES can be seen as a special ME learning. Finally, we propose ME-AT and ME-TRADES to mitigate both the robustness-accuracy tradeoff and robust fairness.
3.1 LABEL SMOOTHING IS NOT ONLY A TRICK IN AT
LS has recently been used as a trick to benchmark robustness by Pang et al. (2020). However, we noticed that LS can usually help improve accuracy too, which means a reduction in the robustness-accuracy tradeoff. In this paper, we find LS helps mitigate the tradeoff for a reason. By visualizing the penultimate layer feature representations of standard models with/without LS in a 2-D figure, Müller et al. (2019) showed that LS encourages the features of training examples from the same class to group in tighter clusters. That is, LS could help reduce the inter-class similarity and intra-class variance in standard training. To see if LS has the same effect in AT, we measured the inter-class similarity and intra-class variance in the PGD-AT and TRADES models with/without LS. As shown in Figure 6, we find LS can indeed reduce both similarity and variance.
Therefore, we argue that LS is more than a trick in AT. Compared to standard training, LS is more significant in AT for reducing the excessive inter-class similarity of robust models. As a result, in Table 1, we can see that LS can more significantly improve accuracy in both the PGD-AT and TRADES models than in the standard training model. We also measured the robustness under AutoAttack (AA) (Croce & Hein, 2020), which is currently the most effective adversarial attack, and we can see that LS also increases AA accuracy in PGD-AT by 1.12.
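Concretely, adding LS to an AT objective only changes the target distribution in the cross-entropy term; a minimal sketch using PyTorch's built-in smoothing argument, where x_adv denotes a precomputed adversarial batch and the smoothing value 0.1 is our assumption, not the paper's setting:

```python
import torch.nn.functional as F

# PGD-AT + LS: cross-entropy against smoothed targets on adversarial inputs
loss = F.cross_entropy(model(x_adv), y, label_smoothing=0.1)
```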
However, we find that when adding LS into TRADES, the AA robustness becomes 0.2 lower than the original TRADES model, as shown in Table 1. This could happen because LS will cause a loss of information in the logits, and hence weaken the discriminative power of the trained models (Müller et al., 2019), which could hurt the performance of semi-supervised learning (knowledge distillation in Müller et al. (2019)’s case and TRADES here). This pitfall of LS motivates us to investigate the effect of another classic smoothing regularizer, namely, the ME.
3.2 UNDERSTANDING THE EFFECT OF MAXIMUM ENTROPY
3.2.1 PROPOSED METHOD
ME learning. Let us first take a brief look at ME learning. Let p_θ(x) be the probability distribution over classes produced by a DNN model for input x. The entropy of this conditional distribution is given by:

H(p_θ(x)) = − Σ_i p_θ(x)_i log p_θ(x)_i.

By adding the negative entropy term to the cross-entropy loss during training, the objective of ME learning is defined as:

L_ME(θ) = CE(x, y) − β H(p_θ(x)), (β > 0),

where y represents the one-hot label, and CE is the cross-entropy function:

CE(x, y) = − Σ_i y_i log p_θ(x)_i.
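A minimal PyTorch sketch of this objective, written as a direct transcription (the function and variable names are ours):

```python
import torch.nn.functional as F

def me_loss(logits, y, beta):
    """Cross-entropy minus beta * entropy of the predicted distribution."""
    log_p = F.log_softmax(logits, dim=1)
    entropy = -(log_p.exp() * log_p).sum(dim=1).mean()
    return F.cross_entropy(logits, y) - beta * entropy
```

For ME-AT (introduced below), the same loss would simply be evaluated on the logits of adversarial examples.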
TRADES is a special ME learning. Next, we show that the state-of-the-art AT method TRADES is also a form of ME learning. Recall that the objective function of TRADES is:

L_TRADES(θ) = CE(x, y) + λ · KL(p_θ(x) ‖ p_θ(x′)),

where x′ is the adversarial example and KL is the KL-divergence, which is given by:

KL(p_θ(x) ‖ p_θ(x′)) = Σ_i p_θ(x)_i log p_θ(x)_i − Σ_i p_θ(x)_i log p_θ(x′)_i = −H(p_θ(x)) − Σ_i p_θ(x)_i log p_θ(x′)_i.

The TRADES objective function can thus be rewritten as:

L_TRADES(θ) = [ CE(x, y) − λ H(p_θ(x)) ] (maximum entropy learning) − λ Σ_i p_θ(x)_i log p_θ(x′)_i (adversarial cross-entropy).

We can see that the left part corresponds to ME learning, and the right part is a cross-entropy loss between the distributions of clean and adversarial examples. This finding reveals that TRADES is a special ME learning; as the most direct consequence, the TRADES model has larger entropy than the PGD-AT model on both clean and adversarial examples in Figure 7. We also notice that the entropy value of the TRADES model is still far from the limit entropy value in a 10-class setting (approximately 2.3), which means there should be enough room for the PGD-AT and TRADES models to receive a stronger ME regularization. Based on this fact, we propose maximum entropy PGD-AT (ME-AT) and maximum entropy TRADES (ME-TRADES).
ME-AT & ME-TRADES. Here, we formalize the objective functions of ME-AT and ME-TRADES. For ME-AT, we maximize the entropy of the adversarial example distribution, giving the objective:
L_{ME\text{-}AT}(\theta) = CE(x', y) - \beta H(p_\theta(x')), \quad \beta > 0
For ME-TRADES, we strengthen the ME coefficient on the clean example distribution in TRADES, and the objective is:
L_{ME\text{-}TRADES}(\theta) = CE(x, y) - (\lambda + \beta) H(p_\theta(x)) - \lambda \sum_i p_\theta(x)_i \log p_\theta(x')_i, \quad \beta > 0
For implementation simplicity, we realize ME-TRADES in the equivalent, more concise form:

L_{ME\text{-}TRADES}(\theta) = CE(x, y) + \lambda \cdot KL(p_\theta(x) \| p_\theta(x')) - \beta H(p_\theta(x))
which only requires adding a negative entropy term to the original TRADES code.
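A minimal sketch of this concise form in PyTorch, assuming the clean and adversarial logits have already been computed; the function name and default hyperparameters are illustrative, not the paper's released code.

```python
import torch.nn.functional as F

def me_trades_loss(logits_clean, logits_adv, targets, lam=6.0, beta=1.0):
    # CE(x, y) on clean examples
    ce = F.cross_entropy(logits_clean, targets)
    p_clean = F.softmax(logits_clean, dim=1)
    log_p_clean = F.log_softmax(logits_clean, dim=1)
    # KL(p_theta(x) || p_theta(x')); kl_div(input, target) computes
    # KL(target || input) when input is given in log space
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1), p_clean,
                  reduction='batchmean')
    # -beta * H(p_theta(x)) pushes the clean distribution to higher entropy
    entropy = -(p_clean * log_p_clean).sum(dim=1).mean()
    return ce + lam * kl - beta * entropy
```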
3.2.2 EXPERIMENT
Training setting. Our experiments are based on CIFAR10, which is the most popular dataset in AT. We perform the standard CIFAR10 data augmentation: a random 4-pixel crop followed by a random horizontal flip. We train ResNet-18 for 100 epochs using SGD with 0.9 momentum, and the batch size is 64. The initial learning rate is 0.1 and is reduced to 0.01 and 0.001 at epochs 75 and 90, respectively. The weight decay is 2 × 10^{-4}. We use the 10-step PGD adversary in training, with perturbation size ε = 0.03125 under the ℓ∞ norm and a step size fixed at 0.008.
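For reference, a minimal sketch of the 10-step ℓ∞ PGD adversary with the hyperparameters above (ε = 0.03125, step size 0.008, random start); this is a generic implementation, not the authors' code.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03125, alpha=0.008, steps=10):
    # random start inside the eps-ball, clipped to the valid image range
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # ascend the loss, then project back onto the eps-ball around x
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```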
Test setting. To evaluate robustness, we use PGD-20, C&W-20 (Carlini & Wagner, 2017) and AA (Croce & Hein, 2020) to generate adversarial examples, also at ε = 0.03125 under the ℓ∞ norm. We report the test accuracy/robustness of the best checkpoint, i.e., the one achieving the highest robustness under PGD-20 on the test set.
Reduce inter-class similarity & intra-class variance. We first measure the inter-class similarity and intra-class variance of our ME-AT and ME-TRADES models. As shown in Figure 6, ME helps reduce both similarity and variance more effectively than LS. Since TRADES is also an ME learning method, this effect of ME can explain why the TRADES model has lower variance than the PGD-AT model in Figure 2b. Note that the inter-class similarity of the TRADES model is a little higher than that of the PGD-AT model in Figure 2a because the adversarial regularization hyperparameter λ = 6 corresponds to a strong adversarial regularization, which raises the similarity. To see the effect of λ on inter-class similarity and intra-class variance clearly, we measured both at λ = 1, 6, 12 in TRADES; at λ = 1, TRADES has both lower similarity and lower variance than PGD-AT (see Figure 9 in the Appendix).
Enable a larger hyperparameter λ. Next, we evaluate the performance of TRADES and our ME-TRADES, as shown in Table 2. We find that ME-TRADES can adopt a higher adversarial regularization hyperparameter λ than the original TRADES: at λ = 6 the robustness of TRADES reaches its maximum (49.95 AA accuracy) and then decreases as λ increases, whereas ME-TRADES reaches its maximum robustness at λ = 27, much larger than the λ = 6 optimum of the original TRADES, with a higher maximum robust accuracy as well (50.61 AA accuracy). This could be caused by the effect of ME in reducing the inter-class similarity and intra-class variance, which could improve the effective capacity of the DNN model, allow the model to receive a stronger adversarial regularization, and thereby achieve higher robustness. This effect of ME may also explain why TRADES usually performs better than PGD-AT on robustness.
Mitigate the robustness-accuracy tradeoff. While a large model capacity has been shown to be crucial for reducing the robustness-accuracy tradeoff (Madry et al., 2017), we find that this possible improvement in effective capacity from ME can also help mitigate the tradeoff. We show the ME-AT results in Table 3. When the ME hyperparameter β = 0.5, ME-AT performs better than PGD-AT in terms of both robustness and accuracy; therefore, ME can effectively mitigate the tradeoff.
Mitigate the robust fairness. To check whether ME-AT and ME-TRADES can mitigate the robust fairness problem, we follow Xu et al. (2021)'s setting of using the worst-class performance to measure fairness, where higher worst-class accuracy/robustness means better fairness. However, we do not compare against the fair robust learning algorithm (FRL) proposed by Xu et al. (2021). FRL is designed to solve the fairness problem and can significantly increase the worst-class performance, but it also yields a worse average performance than the previous state-of-the-art AT method TRADES. In contrast, our ME-AT and ME-TRADES are proposed to mitigate both the robustness-accuracy tradeoff and robust fairness, i.e., to increase the worst-class performance and the average performance at the same time, which is a more difficult task than improving only the worst-class performance. Therefore, a direct comparison with FRL is not appropriate. As shown in Table 4, both ME and LS help increase the worst-class accuracy and robustness when added to PGD-AT. Because TRADES already contains ME, TRADES performs better than PGD-AT with respect to fairness. We show the results of ME-TRADES at β = 1 and λ = 15, 27, which correspond to the parameters with the largest average accuracy and robustness, respectively. ME-TRADES further improves the worst-class accuracy (λ = 15) and robustness (λ = 27) compared to TRADES.
Benchmark the AA leaderboard. To benchmark robustness on the AA leaderboard, we combine our method with robust self-training (RST) (Carmon et al., 2019), which adds 500K pre-processed images to training, and obtain 60.37 AA robustness at ε = 8/255 on CIFAR10. Training details are provided in Appendix A.3.1.
4 CONCLUSION
In this paper, we corroborate that AT causes an increase in inter-class similarity, which could be the root of both the robustness-accuracy tradeoff and the robust fairness phenomenon. We confirm that ME can help reduce the excessive inter-class similarity in robust models and also provides lower intra-class variance, remedying a shortcoming of previous AT methods. Our work could provide new insight into understanding and mitigating both the robustness-accuracy tradeoff and robust fairness.
A APPENDIX
A.1 TRAINING DETAILS AND RESULTS OF SINGLE AT EXPERIMENTS
Training details of single AT. Our single AT experiments are performed on the ResNet-18 model. A linear layer (512 × 2) was added before the fc layer of ResNet-18 to output 2-D features, and the weight of the fc layer was resized to (2 × 10) to output the logits. We set bias = None for the fc layer, which means we can estimate the similarity of two classes purely from their included angle in the 2-D feature representation figure. For the single adversarially trained class, we perform PGD-AT (Madry et al., 2017) at perturbation size ε = 0.03125 under the ℓ∞ norm, and for the other classes we perform standard training. The single AT results of the 1st class car and the 3rd class cat have been shown in Figure 3; the results for the remaining classes are provided in Figure 8 below.
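A minimal sketch of this architectural change on a torchvision ResNet-18; the variable names are illustrative and the snippet is our reading of the description above, not the authors' code.

```python
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)
# Insert a 512 -> 2 linear layer before the classifier and shrink the
# classifier to 2 -> 10 with bias disabled, so class similarity can be
# read directly from included angles in the 2-D feature plane.
model.fc = nn.Sequential(
    nn.Linear(512, 2),
    nn.Linear(2, 10, bias=False),
)
```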
A.2 LARGER λ CAUSES HIGHER INTER-CLASS SIMILARITY
In Figure 9, we can see that as the adversarial regularization hyperparameter λ increases, the inter-class similarity also increases. And because ME reduces the intra-class variance, we also find that the intra-class variance decreases as λ increases.
A.3 BENCHMARK THE AA LEADERBOARD
A.3.1 ME-RST
Carmon et al. (2019) used an additional 500K pre-processed images to train robust models, which is the RST procedure. To benchmark robustness on the AA (Croce & Hein, 2020) leaderboard, we combine the maximum entropy regularizer with the RST method (ME-RST). To boost performance, we replaced the batch normalization (BN) layers with batch-instance normalization (BIN) layers (Nam & Kim, 2018) in the WideResNet-28-10 model; see A.3.2 for the reasons. We only change λ to 27 (the highest-robustness parameter in our ResNet-18 experiments) in the original RST settings (https://github.com/yaircarmon/semisup-adv), and the ME hyperparameter β is set to 1. Our results are tested at perturbation size ε = 8/255 under the ℓ∞ norm. We compare our method with the top-6 methods on the RobustBench website (https://robustbench.github.io/) that also use the WideResNet-28-10 model and ε = 8/255 for testing robustness. Evaluation results are shown in Table 5.
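For context, a minimal sketch of the batch-instance normalization idea from Nam & Kim (2018): a learnable per-channel gate ρ ∈ [0, 1] mixes batch-normalized and instance-normalized activations. Initialization and other details here are assumptions, not taken from the original implementation.

```python
import torch
import torch.nn as nn

class BatchInstanceNorm2d(nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, affine=False)
        self.inorm = nn.InstanceNorm2d(num_channels, affine=False)
        # per-channel gate between batch and instance normalization
        self.rho = nn.Parameter(torch.full((1, num_channels, 1, 1), 0.5))
        self.gamma = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, num_channels, 1, 1))

    def forward(self, x):
        rho = self.rho.clamp(0.0, 1.0)
        out = rho * self.bn(x) + (1.0 - rho) * self.inorm(x)
        return out * self.gamma + self.beta
```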
A.3.2 ROBUST MODEL MAY NEED TO LEARN MORE SHAPE FEATURES
Inspired by our findings in this paper, we pose an open question: how do DNNs come to treat two classes as similar in standard training? If we knew that, we could design more robust model architectures with low inter-class similarity. However, this question is still a hard problem in deep learning, because it requires answering another question first: what kinds of features do DNN models learn?
We notice that previous work has studied this question. Geirhos et al. (2018) showed that DNNs can be biased towards texture features while underusing shape features. This may explain why
the classes dog and cat are so hard to classify in our binary experiment: dogs and cats have similar texture features (hair). Therefore, robust models may need to pay more attention to shape features. To learn more shape features, we attempted to replace the BN layers with BIN layers, proposed by Nam & Kim (2018) to balance shape and texture features, and the experimental results in Table 5 indicate that BIN can effectively help improve both accuracy and robustness.

1. What is the focus of the paper regarding adversarial training and its effects on accuracy and robustness?
2. What are the strengths of the proposed approach combining AT and TRADES with smoothing techniques?
3. What are the weaknesses of the paper regarding its limitations and lack of discussion on related works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper investigates why adversarial training can sometimes present a worse trade-off between robustness and accuracy, and found one root cause could be that AT causes a substantial increase in the inter-class similarity. The paper then proposes combining both AT and TRADES with two smoothing techniques, label-smoothing, and maximum entropy, and found the resulting methods yield a better trade-off over natural accuracy and robustness.
Review
Strengths:
The problem is well motivated, and the authors presented a very thorough analysis on the per-class accuracy and inter-class similarity on why AT could lead to a trade-off over accuracy and robustness. The reasoning train from section 1 through section 2 is interesting and well supported by experiments and analysis, and the method in section 3 is well motivated.
The empirical results are strong on CIFAR-10 and demonstrate a better accuracy-robustness trade-off compared to two defense baselines (AT and TRADES).
Weakness:
Currently all the results/analyses are on CIFAR-10 only. More experiments on other datasets would be useful to show that the hypothesis (AT hurts inter-class similarity) holds across multiple scenarios.
The authors should add more discussion on related works. A few works have shown that the improved adversarial robustness from label smoothing might be an effect of gradient masking, and thus LS could be vulnerable under other attacks. [1] Please see the discussion in: https://openreview.net/forum?id=BJlr0j0ctX; and [2] Fu et al. Label Smoothing and Adversarial Robustness. https://arxiv.org/pdf/2009.08233.pdf
Given that the current paper focuses more on the robustness-accuracy tradeoff side, complete adversarial robustness might not be a goal, but it would be great if the authors could add more discussion on this front.
Minor:
Figure 2b: please pair the colors with class categories;
Section 3.2.1, please add the citation back for ME learning (as in Section 1.2)
Table 5 is informative and puts the results in perspective; consider moving it to the main text.
ICLR

Title
Understanding the robustness-accuracy tradeoff by rethinking robust fairness
Abstract
Although current adversarial training (AT) methods can effectively improve robustness on adversarial examples, they usually lead to a decrease in accuracy, called the robustness-accuracy tradeoff. In addition, researchers have recently discovered a robust fairness phenomenon in AT models: with the introduction of AT, the decline in accuracy is not uniform across the categories of the dataset but concentrates on a few classes. In this paper, we explore the relationship between the robustness-accuracy tradeoff and robust fairness for the first time. Empirically, we find that AT causes a substantial increase in inter-class similarity, which could be the root cause of both phenomena. We argue that label smoothing (LS) is more than a trick in AT: the smoothness learned from LS can help reduce the excessive inter-class similarity caused by AT, and also reduce the intra-class variance, thereby significantly improving accuracy. We then explore the effect of another classic smoothing regularizer, namely, maximum entropy (ME), and find that ME can also help reduce both inter-class similarity and intra-class variance. Additionally, we reveal that TRADES actually implies the function of ME, which can explain why TRADES usually performs better than PGD-AT on robustness. Finally, we propose the maximum entropy PGD-AT (ME-AT) and the maximum entropy TRADES (ME-TRADES), and experimental results show that our methods can significantly mitigate both the tradeoff and robust fairness.
1 INTRODUCTION
1.1 BACKGROUND
Deep neural networks (DNNs) have been proven to be vulnerable to adversarial attacks, as demonstrated in (Szegedy; Goodfellow et al.; Kurakin et al.; Carlini & Wagner). By adding crafted imperceptible perturbations to the input, attackers can easily fool a model into giving an incorrect prediction. To defend against adversarial attacks, tens of methods have been proposed, but most of them were later proven ineffective (Athalye et al., 2018). Among these defense techniques, adversarial training (AT) (Madry et al., 2017) has been proven the most effective strategy against adversarial attacks.
Although current AT algorithms can effectively improve model robustness, there are two puzzling phenomena in AT models. First, there is a seemingly inevitable robustness-accuracy tradeoff (Tsipras et al., 2018): increasing robustness is always accompanied by an accuracy drop. Second, Xu et al. (2021) recently found that AT tends to introduce severe disparities in accuracy and robustness between different classes. For example, as shown in Figure 1b, in a PGD-AT model (Madry et al., 2017), both the accuracy and robustness of the 3rd class cat are much lower than those of the 1st class car, while the two classes have similar accuracies in the standard training model (see Figure 1a). The authors refer to this phenomenon as robust fairness.
Additionally, as Xu et al. (2021) mentioned, the robust fairness problem is closely related to the robustness-accuracy tradeoff, because the average accuracy drop in the robustness-accuracy tradeoff could mainly come from the classes that are hard to classify in AT. To verify this, we have measured the accuracy drop for each class, and calculated their percentage in the total accuracy drop, as shown in Figure 1c. We can see that it only takes two classes (the cat and bird) to contribute almost half of
the accuracy drop, while these two classes also have the lowest accuracy and robustness among all classes in AT. That is, these hard-to-classify classes have a significantly greater impact on the decline in accuracy, and to better understand the robustness-accuracy tradeoff, we should determine why these classes are so difficult to classify in AT.
To explain the phenomenon, Xu et al. (2021) argued that some classes are difficult to classify in AT because they are intrinsically "harder" to classify than other classes, and AT tends to hurt both the accuracy and robustness of these "hard" classes. To verify this point of view, the authors studied the effect of AT on a binary classification task under a Gaussian mixture distribution, in which the "hard" class is the one with larger variance. They showed that AT pushes the decision boundary closer to the larger-variance class and further worsens both the accuracy and robustness of that class.
However, although they showed that the class with larger variance is more difficult to classify, a question remains: is variance enough to describe how "hard" a class is? Imagine two Gaussian distributions that both have high variance but also an extremely large difference in mean values; they can still be classified well. Conversely, when the two Gaussians both have low variance but extremely similar means, so that the distributions severely overlap, we cannot classify them satisfactorily. That is, the inter-class similarity is also an important factor affecting a model's accuracy. With this point in mind, we measured both the inter-class similarity and the intra-class variance of standard training, PGD-AT and TRADES models (Zhang et al., 2019) for each class of the CIFAR10 test set, as shown in Figure 2.
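To make the thought experiment above concrete, here is a small sketch computing the Bayes error of two equal-variance 1-D Gaussians with equal priors; the numbers are illustrative only.

```python
import math

def bayes_error(mu1, mu2, sigma):
    # For equal priors and equal variance, the optimal boundary is the
    # midpoint, and the error equals Phi(-|mu1 - mu2| / (2 * sigma)).
    z = abs(mu1 - mu2) / (2.0 * sigma)
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

print(bayes_error(0.0, 20.0, 2.0))  # high variance, distant means: ~2.9e-07
print(bayes_error(0.0, 0.2, 0.5))   # low variance, similar means: ~0.42
```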
The measurement is performed in the penultimate layer feature space. For each class, we use the variance of features as the class’s intra-class variance. To measure the inter-class similarity of each
class, we first calculate the feature mean vectors of all classes, and then the cosine similarity between the mean vector of the measured class and those of the other classes. The largest such cosine similarity is used as the inter-class similarity of the measured class in this paper. It is somewhat surprising to see that both the PGD-AT and TRADES models have lower variance than the standard training model in Figure 2b, while they have worse accuracy. However, as shown in Figure 2a, both PGD-AT and TRADES lead to higher inter-class similarity than standard training. In particular, we notice that the "hardest" class cat does not have the largest variance in the PGD-AT, TRADES or standard training model, but it does have the largest inter-class similarity. These observations challenge Xu et al. (2021)'s theory that the "hard" classes are the large-variance classes, and indicate that inter-class similarity does matter in AT, motivating us to study both the robust fairness phenomenon and the robustness-accuracy tradeoff through the lens of increased inter-class similarity.
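A minimal NumPy sketch of this measurement, assuming a matrix of penultimate-layer features and integer labels; treating a class's intra-class variance as the summed per-dimension feature variance is our reading of the description above, not a detail stated in the paper.

```python
import numpy as np

def class_stats(features, labels, num_classes=10):
    # per-class mean vectors in the penultimate-layer feature space
    means = np.stack([features[labels == c].mean(axis=0)
                      for c in range(num_classes)])
    unit = means / np.linalg.norm(means, axis=1, keepdims=True)
    cos = unit @ unit.T
    np.fill_diagonal(cos, -np.inf)   # ignore a class's similarity to itself
    inter_sim = cos.max(axis=1)      # largest cosine similarity per class
    intra_var = np.array([features[labels == c].var(axis=0).sum()
                          for c in range(num_classes)])
    return inter_sim, intra_var
```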
1.2 OUR CONTRIBUTIONS
Understand the robustness-accuracy tradeoff & robust fairness. To the best of our knowledge, we are the first to study the relationship between the robustness-accuracy tradeoff and the robust fairness, and we find that the two phenomena could both come from the increased inter-class similarity caused by AT. More specifically, through our single AT and binary classification AT experiments in section 2, we find that:
• AT will cause a general increase in inter-class similarity for each class, which even causes a feature overlap, and finally leads to the accuracy drop in the tradeoff.
• The “hard” classes in AT are actually similar classes in standard training, and the increased inter-class similarity in AT makes them more similar and harder to be classified, which causes the robust fairness problem.
Re-investigate the effect of smoothing regularizers in AT. Label smoothing (LS) (Szegedy et al., 2016) has been used as a trick to benchmark robustness in AT by Pang et al. (2020); however, we notice that LS not only helps improve robustness but usually improves accuracy too, which means a reduction in the robustness-accuracy tradeoff. In this paper, we find LS helps alleviate the tradeoff because it reduces the large inter-class similarity in AT and also provides lower intra-class variance. We then investigate the effect of maximum entropy (ME) (Pereyra et al., 2017), which is also a classic smoothing regularizer, and find that ME helps reduce both inter-class similarity and intra-class variance too. In addition, we find that the state-of-the-art AT method TRADES can be seen as a special case of maximum entropy learning, which could explain why the TRADES model has lower intra-class variance than the PGD-AT model in Figure 2, and why it usually performs better than PGD-AT in terms of robustness. We propose the maximum entropy PGD-AT (ME-AT) and the maximum entropy TRADES (ME-TRADES), and experimental results show that our methods can significantly mitigate both the tradeoff and robust fairness.
2 RETHINKING ROBUST FAIRNESS TOWARD INTER-CLASS SIMILARITY
In Figure 2a, we have shown that AT models have higher inter-class similarity than the standard training model. In this section, we design two experiments to see how the high inter-class similarity in AT is related to both the robustness-accuracy tradeoff and robust fairness phenomena.
2.1 SINGLE ADVERSARIAL TRAINING
AT causes a feature overlap. We design single adversarial training (single AT) to see how AT affects one single class. In single AT, we adversarially train only one class while training the other classes normally. For better visualization, we adjust the penultimate layer of a ResNet-18 model to output 2-D features. In Figure 3, we show the single AT results of the two most representative classes: the "hardest" class cat and the "easiest" class car. The results of the other classes and the detailed settings are provided in Appendix A.1. In Figure 3b, when we single adversarially train the 3rd class cat, the features of the cat severely overlap with those of the 5th class dog, and the overlapping features make the class cat almost impossible to classify (only 7.03 natural accuracy and 0 PGD-10 robustness). This observation intuitively shows how the inter-class similarity increases in
AT, and proves that the accuracy drop part in the robustness-accuracy tradeoff could come from the increased inter-class similarity (the overlapping features).
The increase in inter-class similarity is general in AT. However, when single AT is carried out for the 1st class car, the features of the class car can still be split well from other classes, and both the accuracy and PGD-10 robustness of the class car achieve a high level (98.4 and 72.2 respectively, see Figure 3a). Does this mean that the “easy” classes can avoid an increase in inter-class similarity in AT?
To check this, we measure the inter-class similarity in the single AT models, and for comparison, we also measure the inter-class similarity of a standard training 2-D-feature ResNet-18 model. As shown in Figure 4, each class on the blue line represents the inter-class similarity of that class in the corresponding single AT model (e.g., the point for the class car on the blue line represents the inter-class similarity of the class car in the single car AT model), and the yellow line shows the inter-class similarity of the standard training model. We can see that even the "easiest" class car has higher inter-class similarity in the single AT model than in the standard training model. This observation shows that the increase in inter-class similarity is general in AT for all classes.
2.2 BINARY CLASSIFICATION ADVERSARIAL TRAINING
"Hard" classes or similar classes? Since the increase in inter-class similarity is general for all classes, we hypothesize that some classes are difficult to classify in AT because they are already similar in standard training, and the increased inter-class similarity caused by AT makes them even more similar, turning them into the "hard" classes. To verify this hypothesis, we conduct binary classification AT experiments. We pair the class cat with each of the other classes in the CIFAR10 dataset for binary classification, and we use both PGD-AT (Madry et al., 2017) and TRADES (Zhang et al., 2019) to train our binary classification ResNet-18 models (512-D features here).
We plot the natural error and PGD-10 error of the PGD-AT and TRADES binary classification models in Figure 5a and Figure 5b, respectively. The classes on the horizontal axis are those binary classified against the class cat, sorted from small to large by their similarity
with the cat in standard training. We find that both the natural error and the PGD-10 error of the binary classification PGD-AT and TRADES models are highly positively correlated with the similarity in standard training. For example, the class car is the least similar class to the cat in standard training; when the cat is binary classified against the car, the model achieves both low natural error and low PGD-10 error (4.6 and 11.0). However, when the cat is binary classified against the most similar class dog, the ResNet-18 model even fails to converge under PGD-AT (49.7 for both natural and PGD-10 error), and although the model converges under TRADES, it still has the highest natural and PGD-10 errors (23.4 and 44.0). This observation indicates that the "hard" classes in AT may actually be the similar classes in standard training.
2.3 UNDERSTANDING THE TRADEOFF & ROBUST FAIRNESS
To briefly summarize, through our single AT and binary classification AT experiments, we find the following:
• AT will even cause a feature overlap to the “hard” classes, which leads to a severe accuracy drop.
• The increase in inter-class similarity is general in AT for all classes.
• "Hard" classes in AT may actually be similar classes in standard training, due to the generally increased inter-class similarity.
These findings indicate that the increased inter-class similarity could be the root cause of both the robustness-accuracy tradeoff and the robust fairness problem, and they suggest a new way to mitigate the tradeoff: to obtain better robustness and accuracy, the excessive inter-class similarity in AT should be reduced (without increasing the intra-class variance). In the next section, we show this direction is promising through the effect of smoothing regularizers.
Our explanation. Finally, we provide an intuitive explanation for why AT leads to higher inter-class similarity. The core objective of AT is to force adversarial examples to follow the same distribution as the well-classified clean examples. To achieve this, TRADES directly minimizes the KL divergence between adversarial and clean examples and minimizes the cross-entropy loss of clean examples to achieve high accuracy. In PGD-AT, this objective is implicit: PGD-AT directly minimizes the cross-entropy loss of adversarial examples, and because adversarial examples can be seen as a robust lower bound of clean examples, it effectively forces both adversarial and clean examples to fit the same one-hot label. However, forcing adversarial examples to fit the distribution of clean examples may also pull clean examples closer to the distribution of adversarial examples. Since adversarial examples naturally lie closer to other classes, adopting them in training can pull classes closer together, resulting in higher inter-class similarity.
3 THE ROLE OF THE SMOOTHING REGULARIZER
Based on our findings in section 2, in this section we argue that LS is not only a trick in AT. We find that the smoothness learned from LS can significantly reduce both inter-class similarity and intra-class variance in AT, remedying a shortcoming of current AT methods. Then, we investigate the effect of ME, which is also a classic smoothing regularizer, and we find TRADES can be seen as a special ME learning. Finally, we propose ME-AT and ME-TRADES to mitigate both the robustness-accuracy tradeoff and robust fairness.
1. What is the focus of the paper regarding adversarial training and robust fairness?
2. What are the strengths of the proposed approach, particularly in improving the accuracy-robustness tradeoff and robustness fairness?
3. Do you have any concerns regarding the explanation of the motivation behind the paper's contribution?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What are the limitations of the paper, especially in comparing its techniques with other recent works on fair robust training?

Summary Of The Paper
This paper identifies how adversarial training (AT) algorithms for robustness may negatively affect the notion of robust fairness and proposes two methods ME-AT and ME-TRADES, which combine existing AT methods with a maximum entropy (ME) term, to improve the accuracy-robustness tradeoff and robustness fairness. Although AT algorithms improve model robustness, they also increase inter-class similarities, which make certain classes more difficult to classify, leading to unfair accuracies. The paper then shows that label smoothing (LS) mitigates this effect and in particular investigates the ME technique. The authors show that a method called TRADES outperforms another method called PGD-AT because it is a special version of ME. Experiments show that combining ME with these AT methods outperforms PGD-AT.
Review
Improving the fairness of robust training is a timely problem that is being actively studied.
The empirical results on how adversarial training increases inter-class similarities, which reduces robust fairness, is convincing.
The experiments show that ME-AT and ME-TRADES indeed improves the robust-accuracy tradeoff and robust fairness.
Overall, the paper spends too many pages motivating the problem while missing critical content, as explained in the following weak points. The explanation of why AT algorithms increase inter-class similarities goes on for 5 pages, but the content is often redundant and can be significantly reduced while being just as convincing. The remaining 4 pages seem too short for the rest of the material.
In the Introduction, the PGD-AT and TRADES techniques appear without much explanation. How do these techniques work and why are they the important ones to consider for fair robust training? Moreover, there is no related work section, and the authors should compare their techniques with the following recent fair robust training works:
Zhang and Davidson, "Towards Fair Deep Anomaly Detection", FAccT 2021.
Khani and Liang, "Removing Spurious Features can Hurt Accuracy and Affect Groups Disproportionately", FAccT 2021.
Section 3.1 is critical where it explains why LS prevents inter-class similarities from going up. However, the explanation is not convincing where it only shows a few experiments on one dataset that is not even well described (is it the dataset mentioned in the Introduction?). As LS is an important technique, it should be explained in the paper instead of just adding a citation. Most importantly, there needs to be some convincing analysis showing why LS reduces similarity and variance for any dataset in general.
The flow from Section 3.1 to 3.2 is not clear. If a few LS techniques work well empirically, why does that lead you to investigate ME? Do all LS techniques work well? Is there no LS technique that works better than ME? What if ME works well for different reasons? Among the ME methods, why only consider the TRADES method? If TRADES is already a special ME technique, how is ME-TRADES an improvement?
The experiments are not extensive enough where only the CIFAR-10 dataset is used. Instead, there must be at least two or three datasets to make sure the results are general. The same comment goes for the experiments in the Introduction. In addition, the authors do not compare their techniques with FRL by Xu et al. because it is said to solve an easier problem and is thus "not appropriate". I am not sure if I agree. FRL is also one of the SOTA methods and has the similar goal of improving robustness against adversarial data while adding constraints to reduce accuracy disparities between groups. IMHO the authors should definitely make an extensive comparison with FRL to emphasize the effectiveness of ME-TRADES.
In Section 3.2.2, please clearly define PGD-20 and the exact measure used to produce the Table 2 values.
The role of intra-class variance is not clear. The inter-class similarity seems to be the main cause of the accuracy-robustness tradeoff and robust fairness in the paper. In addition, existing AT methods already decrease the intra-class variance compared to standard training. Hence it is unclear how important intra-class variance is.
ICLR | Title
Understanding the robustness-accuracy tradeoff by rethinking robust fairness
Abstract
Although current adversarial training (AT) methods can effectively improve the robustness on adversarial examples, they usually lead to a decrease in accuracy, called the robustness-accuracy trade-off. In addition, researchers have recently discovered a robust fairness phenomenon in the AT model; that is, not all categories of the dataset have experienced a serious decline in accuracy with the introduction of AT methods. In this paper, we explore the relationship between the robustness-accuracy tradeoff and robust fairness for the first time. Empirically, we have found that AT will cause a substantial increase in the inter-class similarity, which could be the root cause of these two phenomena. We argue that the label smoothing (LS) is more than a trick in AT. The smoothness learned from LS can help reduce the excessive inter-class similarity caused by AT, and also reduce the intra-class variance, thereby significantly improving accuracy. Then, we explored the effect of another classic smoothing regularizer, namely, the maximum entropy (ME), and we have found ME can also help reduce both inter-class similarity and intra-class variance. Additionally, we revealed that TRADES actually implies the function of ME, which can explain why TRADES usually performs better than PGD-AT on robustness. Finally, we proposed the maximum entropy PGD-AT (ME-AT) and the maximum entropy TRADES (ME-TRADES), and experimental results show that our methods can significantly mitigate both tradeoff and robust fairness.
1 INTRODUCTION
1.1 BACKGROUND
Deep neural networks (DNNs) have been proven to be vulnerable to the adversarial attacks, as demonstrated in (Szegedy; Goodfellow et al.; Kurakin et al.; Carlini & Wagner). By adding crafted imperceptible perturbations to the input, attackers can easily fool the model to give an incorrect prediction. To defend against adversarial attacks, tens of methods have been proposed, but most of them later proved to be ineffective (Athalye et al., 2018). Among these many defense techniques, adversarial training (AT) (Madry et al., 2017) has been proven to be the most effective strategy against adversarial attacks.
Although current AT algorithms can effectively improve model robustness, there are two confusing phenomena in AT models. First, there can be an inevitable robustness-accuracy tradeoff (Tsipras et al., 2018) in AT models in which increasing robustness is always accompanied by an accuracy drop. Second, recently Xu et al. (2021) found that AT tends to introduce severe disparities in accuracy and robustness between different classes. For example, as shown in Figure 1b, in a PGD-AT model (Madry et al., 2017), both the accuracy and robustness of the 3rd class cat are much lower than those of the 1st class car, while the two classes have similar accuracies in the standard training model (see Figure 1a). This phenomenon is defined as the robust fairness according to the authors.
Additionally, as Xu et al. (2021) mentioned, the robust fairness problem is closely related to the robustness-accuracy tradeoff, because the average accuracy drop in the robustness-accuracy tradeoff could mainly come from the classes that are hard to classify in AT. To verify this, we have measured the accuracy drop for each class, and calculated their percentage in the total accuracy drop, as shown in Figure 1c. We can see that it only takes two classes (the cat and bird) to contribute almost half of
the accuracy drop, while the two classes have the lowest accuracy and robustness than other classes in AT. That is, these hard classified classes have a significantly greater impact on the decline in accuracy, and to better understand the robustness-accuracy tradeoff, it should be determined why these classes are so difficult to classify in AT.
To explain the phenomenon, Xu et al. (2021) argued that some classes are difficult to classify in AT because they are intrinsically “harder” to classify than other classes, and AT tends to hurt both accuracy and robustness of these “hard” classes. To verify this point of view, these authors studied the effect of AT on a binary classification task under a mixture Gaussian distribution, and the “hard” class is the one with larger variance in their case. They showed that AT will push the decision boundary closer to the larger variance class and further worsen both the accuracy and robustness of the class.
However, although they showed that the class with a larger variance is more difficult to classify, there still remains a question; that is, is variance enough to describe the “hard” degree of a class? Imagine two Gaussian distributions both have a high variance, but also an extremely large difference of mean values, they should still be well classified. On the contrary, when the two Gaussian distributions both have a low variance, but the mean values of them are extremely similar, which makes the two distributions severely overlap, we cannot satisfactorily classify them instead. That is, the inter-class similarity is also an important factor affecting the model’s accuracy. With this point in mind, we have measured both inter-class similarity and intra-class variance in standard training, PGD-AT and TRADES models (Zhang et al., 2019) for each class in the CIFAR10 test set, as shown in Figure 2.
The measurement is performed in the penultimate layer feature space. For each class, we use the variance of features as the class’s intra-class variance. To measure the inter-class similarity of each
class, we first calculate the feature mean value vectors for all classes, and then the cosine similarity between the mean value vectors of the measured class and other classes. The largest cosine similarity is used as the inter-class similarity of the measured class in this paper. It is somewhat surprising to see that both PGD-AT and TRADES models have a lower variance than the standard training model in Figure 2b, while they have a worse accuracy instead. However, as shown in Figure 2a, both PGD-AT and TRADES can lead to a higher inter-class similarity than standard training. In particular, we notice that the “hardest” class cat does not have the largest variance no matter in PGD-AT, TRADES or the standard training model, but has the largest inter-class similarity. These observations have challenged Xu et al. (2021)’s theory that the “hard” classes are the large variance classes and indicate that inter-class similarity does matter in AT, thus motivated us to study both robust fairness phenomenon and robustness-accuracy tradeoff toward the increased inter-class similarity.
1.2 OUR CONTRIBUTIONS
Understand the robustness-accuracy tradeoff & robust fairness. To the best of our knowledge, we are the first to study the relationship between the robustness-accuracy tradeoff and the robust fairness, and we find that the two phenomena could both come from the increased inter-class similarity caused by AT. More specifically, through our single AT and binary classification AT experiments in section 2, we find that:
• AT will cause a general increase in inter-class similarity for each class, which even causes a feature overlap, and finally leads to the accuracy drop in the tradeoff.
• The “hard” classes in AT are actually similar classes in standard training, and the increased inter-class similarity in AT makes them more similar and harder to be classified, which causes the robust fairness problem.
Re-investigate the effect of smoothing regularizer in AT. Label smoothing (LS) (Szegedy et al., 2016) has been used as a trick to benchmark the robustness in AT by Pang et al. (2020), however, we noticed that LS can not only help improve the robustness, but usually improve the accuracy too, which means a reduction in the robustness-accuracy tradeoff. In this paper, we find LS can help alleviate the tradeoff because it helps reduce the large inter-class similarity in AT, and also provides a lower intra-class variance. Then, we investigate the effect of the maximum entropy (ME) Pereyra et al. (2017), which is also a classic smoothing regularizer, and we find ME can help reduce both inter-class similarity and intra-class variance too. In addition, we find that the state-of-the-art AT method TRADES can be seen as a special maximum entropy learning, which could explain why TRADES model have a lower intra-class variance than PGD-AT model in Figure 2, and usually performs better than PGD-AT in terms of robustness. We proposed the maximum entropy PGD-AT (ME-AT) and the maximum entropy TRADES (ME-TRADES), and experimental results show that our methods can significantly mitigate both tradeoff and robust fairness.
2 RETHINKING ROBUST FAIRNESS TOWARD INTER-CLASS SIMILARITY
In Figure 2a, we have shown that AT models have higher inter-class similarity than the standard training model. In this section, we design two experiments to see how the high inter-class similarity in AT is related to both the robustness-accuracy tradeoff and robust fairness phenomena.
2.1 SINGLE ADVERSARIAL TRAINING
AT causes a feature overlap. We design the single adversarial training (single AT) to see how AT affect one single class. In single AT, we only conduct adversarial training on one class while training other classes normally. For better visualization, we adjust the penultimate layer of a ResNet-18 model to output 2-D features. In Figure 3, we show the single AT results of the two most representative classes: the “hardest” class cat and the “easiest” class car. The results of other classes and detailed settings are both provided in Appendix A.1. In Figure 3b, when single adversarial train the 3rd class cat, the features of the cat severely overlap with those of the 5th class dog, and the overlapping features make the class cat almost impossible to classify (only has 7.03 natural accuracy and 0 PGD-10 robustness). This observation intuitively shows how the inter-class similarity increases in
AT, and proves that the accuracy drop part in the robustness-accuracy tradeoff could come from the increased inter-class similarity (the overlapping features).
The increase in inter-class similarity is general in AT. However, when single AT is carried out for the 1st class car, the features of the class car can still be split well from other classes, and both the accuracy and PGD-10 robustness of the class car achieve a high level (98.4 and 72.2 respectively, see Figure 3a). Does this mean that the “easy” classes can avoid an increase in inter-class similarity in AT?
To check this, we measure the inter-class similarity in the single AT models; for comparison, we also measure the inter-class similarity of a standard-training ResNet-18 model with 2-D features. As shown in Figure 4, each class in the blue line represents the inter-class similarity of that class in the corresponding single AT model (e.g., the point of the class car in the blue line represents the inter-class similarity of the class car in the single car AT model), and the yellow line is the inter-class similarity of the standard training model. We can see that even the “easiest” class car in the single AT model has a higher inter-class similarity than in the standard training model. This observation shows that the increase in inter-class similarity is general in AT for all classes.
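For reference, a minimal PyTorch sketch of the measurement used throughout the paper (largest cosine similarity between class feature means, and per-class feature variance). How the per-dimension variances are aggregated is not stated in the paper, so averaging them here is our assumption.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def class_statistics(features, labels, num_classes):
    """Per-class inter-class similarity (largest cosine similarity between
    a class's penultimate-layer feature mean and any other class mean)
    and intra-class variance."""
    means = torch.stack([features[labels == c].mean(dim=0)
                         for c in range(num_classes)])              # (C, D)
    sim = F.cosine_similarity(means.unsqueeze(1), means.unsqueeze(0), dim=-1)
    sim.fill_diagonal_(-1.0)                  # exclude self-similarity
    inter_sim = sim.max(dim=1).values         # per-class inter-class similarity
    intra_var = torch.stack([features[labels == c].var(dim=0).mean()
                             for c in range(num_classes)])  # assumption: mean over dims
    return inter_sim, intra_var
```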
2.2 BINARY CLASSIFICATION ADVERSARIAL TRAINING
“Hard” classes or similar classes? Since the increase in inter-class similarity is general for all classes, we hypothesize that some classes are difficult to classify in AT because they are already similar in standard training, and the increased inter-class similarity caused by AT makes them more similar, turning them into the “hard” classes. To verify this assumption, we conduct binary classification AT experiments. We pair the class cat with each of the other classes in the CIFAR10 dataset for binary classification, and we use both PGD-AT (Madry et al., 2017) and TRADES (Zhang et al., 2019) to train our binary classification ResNet-18 models (512-D features here).
We plot the natural error and PGD-10 error of the PGD-AT and TRADES trained binary classification models in Figure 5a and Figure 5b, respectively. The classes on the horizontal axis are those binary-classified against the class cat, sorted from small to large by their similarity with the cat in standard training. We find that both the natural error and the PGD-10 error of the binary classification PGD-AT and TRADES models are highly positively correlated with the similarity in standard training. For example, the class car is the least similar class to the cat in standard training; when binary-classifying cat against car, the model achieves both low natural error and low PGD-10 error (4.6% and 11.0%). However, when binary-classifying cat against the most similar class dog, the ResNet-18 model even fails to converge in PGD-AT (49.7% for both natural and PGD-10 error), and although the model can converge in TRADES, it still has both the highest natural error and the highest PGD-10 error (23.4% and 44.0%). This observation indicates that the “hard” classes in AT could actually be the similar classes in standard training.
2.3 UNDERSTANDING THE TRADEOFF & ROBUST FAIRNESS
To briefly summarize, through our single AT and binary classification AT experiments, we find the following:
• AT can even cause a feature overlap for the “hard” classes, which leads to a severe accuracy drop.
• The increase in inter-class similarity is general in AT for all classes.
• The “hard” classes in AT may actually be the similar classes in standard training, made harder by the generally increased inter-class similarity.
These findings indicate that the increased inter-class similarity could be the root cause of both the robustness-accuracy tradeoff and the robust fairness problem, and they suggest a new way to mitigate the tradeoff: to obtain better robustness and accuracy, the excessive inter-class similarity in AT should be reduced (while not increasing the intra-class variance). In the next section, we show that this direction is promising through the effect of smoothing regularizers.
Our explanation. Finally, we provide an intuitive explanation for why AT leads to higher inter-class similarity. The core objective of AT is to force adversarial examples to follow the same distribution as the well-classified clean examples. To achieve this, TRADES directly minimizes the KL-divergence between adversarial and clean examples and minimizes the cross-entropy loss of clean examples to achieve high accuracy. In PGD-AT, this objective is implicit: PGD-AT directly minimizes the cross-entropy loss of adversarial examples, and because adversarial examples can be seen as a robust lower bound of clean examples, it actually forces both adversarial and clean examples to fit the same one-hot label. However, forcing adversarial examples to fit the distribution of clean examples may also pull clean examples closer to the distribution of adversarial examples. Since adversarial examples are naturally closer to other classes, adopting them could pull the classes closer together, resulting in higher inter-class similarity.
3 THE ROLE OF THE SMOOTHING REGULARIZER
Based on our findings in section 2, in this section, we argue that LS is not only a trick in AT. We find that the smoothness learned from LS can significantly reduce both inter-class similarity and intra-class variance in AT, which is a remedy for current AT methods. Then, we investigate the effect of ME, which is also a classic smoothing regularizer, and we find that TRADES can be seen as a special form of ME learning. Finally, we propose ME-AT and ME-TRADES to mitigate both the robustness-accuracy tradeoff and robust fairness.
3.1 LABEL SMOOTHING IS NOT ONLY A TRICK IN AT
LS has recently been used as a trick to benchmark robustness by Pang et al. (2020). However, we noticed that LS can usually help improve accuracy too, which means a reduction in the robustness-accuracy tradeoff. In this paper, we identify why LS helps mitigate the tradeoff. By visualizing the penultimate-layer feature representations of standard models with/without LS in a 2-D figure, Müller et al. (2019) showed that LS encourages the features of training examples from the same class to group in tighter clusters. That is, LS could help reduce the inter-class similarity and intra-class variance in standard training. To see whether LS has the same effect in AT, we measured the inter-class similarity and intra-class variance in the PGD-AT and TRADES models with/without LS. As shown in Figure 6, we find LS can indeed reduce both similarity and variance.
Therefore, we argue that LS is more than a trick in AT. Compared to standard training, LS plays a more significant role in AT by reducing the excessive inter-class similarity of robust models. As a result, in Table 1, we can see that LS improves accuracy more significantly in both the PGD-AT and TRADES models than in the standard training model. We also measured the robustness under AutoAttack (AA) (Croce & Hein, 2020), which is currently the most effective adversarial attack, and we can see that LS also increases AA accuracy in PGD-AT by 1.12.
However, we find that when adding LS into TRADES, the AA robustness becomes 0.2 lower than the original TRADES model, as shown in Table 1. This could happen because LS will cause a loss of information in the logits, and hence weaken the discriminative power of the trained models (Müller et al., 2019), which could hurt the performance of semi-supervised learning (knowledge distillation in Müller et al. (2019)’s case and TRADES here). This pitfall of LS motivates us to investigate the effect of another classic smoothing regularizer, namely, the ME.
3.2 UNDERSTANDING THE EFFECT OF MAXIMUM ENTROPY
3.2.1 PROPOSED METHOD
ME learning. Let us first take a brief look at ME learning. Let pθ(x) be the probability distribution over classes produced by a DNN model for input x. The entropy of this conditional distribution is given by:
H(p_\theta(x)) = -\sum_i p_\theta(x)_i \log p_\theta(x)_i
By adding the negative entropy term into the cross-entropy loss during training, the objective of ME learning is defined as:
L_{ME}(\theta) = CE(x, y) - \beta H(p_\theta(x)), \quad (\beta > 0)
where y represents the one-hot labels, and CE is the cross-entropy function:
CE(x, y) = -\sum_i y_i \log p_\theta(x)_i
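A minimal PyTorch sketch of this objective (names are ours; illustrative only):

```python
import torch.nn.functional as F

def me_loss(logits, targets, beta):
    """L_ME = CE(x, y) - beta * H(p_theta(x)): cross-entropy minus a
    weighted entropy bonus, which penalizes over-confident predictions."""
    log_p = F.log_softmax(logits, dim=1)
    entropy = -(log_p.exp() * log_p).sum(dim=1).mean()
    return F.cross_entropy(logits, targets) - beta * entropy
```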
TRADES is a special ME learning. Next, we show that the state-of-the-art AT method TRADES is also an ME learning method. Recall that the objective function of TRADES is:
L_{TRADES}(\theta) = CE(x, y) + \lambda \cdot KL(p_\theta(x) \,\|\, p_\theta(x'))
where x′ is the adversarial example, and KL is the KL-divergence, given by:
KL(p_\theta(x) \,\|\, p_\theta(x')) = \sum_i p_\theta(x)_i \log p_\theta(x)_i - \sum_i p_\theta(x)_i \log p_\theta(x')_i = -H(p_\theta(x)) - \sum_i p_\theta(x)_i \log p_\theta(x')_i
The TRADES objective function can then be rewritten as:
L_{TRADES}(\theta) = \underbrace{CE(x, y) - \lambda H(p_\theta(x))}_{\text{maximum entropy learning}} \underbrace{-\, \lambda \sum_i p_\theta(x)_i \log p_\theta(x')_i}_{\text{adversarial cross-entropy}}
We can see that the left part corresponds to ME learning, and the right part is a cross-entropy loss between the distributions of clean and adversarial examples. This finding reveals that TRADES is a special form of ME learning; as the most direct consequence, the TRADES model has larger entropy than the PGD-AT model on both clean and adversarial examples in Figure 7. We also notice that the entropy of the TRADES model is still far from the maximum entropy in a 10-class setting (ln 10 ≈ 2.3), which means that there should be enough room for the PGD-AT and TRADES models to receive a stronger ME regularization. Based on this fact, we propose the maximum entropy PGD-AT (ME-AT) and maximum entropy TRADES (ME-TRADES).
ME-AT & ME-TRADES. Here, we formalize the objective functions of ME-AT and ME-TRADES. For ME-AT, we maximize the entropy of the adversarial example distribution, and we have the objective:
L_{ME\text{-}AT}(\theta) = CE(x', y) - \beta H(p_\theta(x')), \quad (\beta > 0)
For ME-TRADES, we augment the ME coefficient on the clean-example distribution in TRADES, and the objective is:
L_{ME\text{-}TRADES}(\theta) = CE(x, y) - (\lambda + \beta) H(p_\theta(x)) - \lambda \sum_i p_\theta(x)_i \log p_\theta(x')_i, \quad (\beta > 0)
For code simplicity, we realize ME-TRADES in a more concise way as:
L_{ME\text{-}TRADES}(\theta) = CE(x, y) + \lambda \cdot KL(p_\theta(x) \,\|\, p_\theta(x')) - \beta H(p_\theta(x))
which only requires adding a negative entropy term to the original TRADES code.
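Concretely, a sketch of the ME-TRADES loss in this concise form; the adversarial example x_adv is assumed to come from the usual TRADES inner maximization, which we omit, and the implementation details are our assumptions:

```python
import torch.nn.functional as F

def me_trades_loss(model, x, x_adv, y, lam, beta):
    """L = CE(x, y) + lam * KL(p(x) || p(x')) - beta * H(p(x));
    only the last term is added on top of the original TRADES loss."""
    logits, logits_adv = model(x), model(x_adv)
    p = F.softmax(logits, dim=1)
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1), p,
                  reduction='batchmean')                  # KL(p(x) || p(x'))
    entropy = -(p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    return F.cross_entropy(logits, y) + lam * kl - beta * entropy
```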
3.2.2 EXPERIMENT
Training setting. Our experiments are based on CIFAR10, which is the most popular dataset in AT. We perform the standard CIFAR10 data augmentation: a random 4-pixel crop followed by a random horizontal flip. We train ResNet-18 for 100 epochs using SGD with 0.9 momentum, and the batch size is 64. The initial learning rate is 0.1 and is reduced to 0.01 and 0.001 at epochs 75 and 90, respectively. The weight decay is 2 × 10^{-4}. We use the 10-step PGD adversary in training, with the perturbation size ε = 0.03125 under the ℓ∞ norm and the step size fixed to 0.008.
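For reference, a sketch of a 10-step ℓ∞ PGD adversary with these hyperparameters; whether a random start and a [0, 1] pixel range are used is not stated, so both are our assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03125, alpha=0.008, steps=10):
    """10-step l_inf PGD with perturbation size eps and step size alpha."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```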
Test setting. To evaluate robustness, we use PGD-20, C&W-20 (Carlini & Wagner, 2017) and AA (Croce & Hein, 2020) to generate adversarial examples, also at ε = 0.03125 under the ℓ∞ norm. We report the test accuracy/robustness of the best checkpoint, i.e., the one achieving the highest robustness under PGD-20 on the test set.
Reduce inter-class similarity & intra-class variance. We first measure the inter-class similarity and intra-class variance of our ME-AT and ME-TRADES models. As shown in Figure 6, ME helps reduce both similarity and variance more effectively than LS. Since TRADES is also an ME learning method, this effect of ME can explain why the TRADES model has lower variance than the PGD-AT model in Figure 2b. Note that the inter-class similarity of the TRADES model is a little higher than that of the PGD-AT model in Figure 2a because the adversarial regularization hyperparameter λ = 6 corresponds to a strong adversarial regularization that makes the similarity higher. To clearly see the effect of λ on inter-class similarity and intra-class variance, we measured both at λ = 1, 6, 12 in TRADES; when λ = 1, TRADES has both lower similarity and lower variance than PGD-AT (see Figure 9 in the Appendix).
Enable larger hyperparameter λ. Next, we evaluate the performance of TRADES and our ME-TRADES, as shown in Table 2. We find that ME-TRADES can adopt a higher adversarial regularization hyperparameter λ than the original TRADES: at λ = 6, the robustness of TRADES reaches its maximum (49.95 AA accuracy) and then decreases as λ increases, whereas ME-TRADES reaches its maximum robustness at λ = 27, which is much larger than the maximum-robustness parameter λ = 6 of the original TRADES, and the maximum robust accuracy is also higher (50.61 AA accuracy). This could be caused by the effect of ME in reducing the inter-class similarity and intra-class variance, which could improve the effective capacity of the DNN model, allow the model to receive a stronger adversarial regularization, and thus help achieve higher robustness. This effect of ME may therefore explain why TRADES usually performs better than PGD-AT on robustness.
Mitigate the robustness-accuracy tradeoff. While large model capacity has been shown to be crucial for reducing the robustness-accuracy tradeoff (Madry et al., 2017), we find that this possible improvement in effective capacity from ME can also help mitigate the tradeoff. We show the ME-AT results in Table 3. When the ME hyperparameter β = 0.5, ME-AT performs better in terms of both robustness and accuracy than PGD-AT; therefore, ME can effectively mitigate the tradeoff.
Mitigate the robust fairness. To check whether ME-AT and ME-TRADES can mitigate the robust fairness problem, we follow Xu et al. (2021)’s setting, which uses the worst-class performance to measure fairness: a higher worst-class accuracy/robustness means better fairness. However, we do not compare against the fair robust learning algorithm (FRL) proposed by Xu et al. (2021). FRL is designed to solve the fairness problem and can significantly increase the worst-class performance, but it also causes worse average performance than the previous SOTA AT method TRADES. In contrast, our ME-AT and ME-TRADES are, to our knowledge, the first methods proposed to mitigate the robustness-accuracy tradeoff and robust fairness together, i.e., to increase the worst-class performance and the average performance at the same time, which is a more difficult task than only improving the worst-class performance. Therefore, it is not appropriate to compare FRL with our method. As shown in Table 4, both ME and LS help increase the worst-class accuracy and robustness when added to PGD-AT. Because TRADES already contains ME, TRADES performs better than PGD-AT with respect to fairness. We show the results of ME-TRADES for β = 1 and λ = 15, 27, which correspond to the largest average accuracy and robustness parameters, respectively. ME-TRADES further improves the worst-class accuracy (λ = 15) and robustness (λ = 27) compared to TRADES.
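The fairness measurement itself is straightforward; a sketch (names are ours):

```python
import torch

@torch.no_grad()
def average_and_worst_class_accuracy(preds, labels, num_classes):
    """Average accuracy and worst-class accuracy (Xu et al., 2021's fairness
    measure): a higher worst-class value means better robust fairness."""
    accs = torch.stack([(preds[labels == c] == c).float().mean()
                        for c in range(num_classes)])
    return accs.mean().item(), accs.min().item()
```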
Benchmark the AA leaderboard. To benchmark the robustness on the AA leaderboard, we combine our method with robust self-training (RST) (Carmon et al., 2019), which adds 500K pre-processed data into training, and we obtain 60.37 AA robustness at ε = 8/255 on CIFAR10. Training details are provided in Appendix A.3.1.
4 CONCLUSION
In this paper, we corroborate that AT causes an increase in inter-class similarity, which could be the root of both the robustness-accuracy tradeoff and the robust fairness phenomena. We confirm that ME can help reduce the excessive inter-class similarity in robust models and also provides a lower intra-class variance, remedying previous AT methods. Our work could provide new insight into understanding and mitigating both the robustness-accuracy tradeoff and robust fairness.
A APPENDIX
A.1 TRAINING DETAILS AND RESULTS OF SINGLE AT EXPERIMENTS
Training details of single AT. Our single AT experiments are performed on the ResNet-18 model. A linear layer (512×2) is added before the fc layer of the ResNet-18 model to output 2-D features, and the weight size of the fc layer is adjusted to (2×10) to output the logits. We set bias = None for the fc layer, which means we can estimate the similarity of two classes solely from their included angle in the 2-D feature representation figure. For the single adversarially trained class, we perform PGD-AT (Madry et al., 2017) at perturbation size ε = 0.03125 under the ℓ∞ norm, and for the other classes we perform standard training. The single AT results of the 1st class car and the 3rd class cat have been shown in Figure 3; the results of the remaining classes are provided in Figure 8 below.
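A sketch of this head modification in PyTorch. Note that torchvision's ResNet-18 uses the ImageNet stem, whereas CIFAR variants typically use a smaller one, so treat this only as an illustration of the head change:

```python
import torch.nn as nn
from torchvision.models import resnet18

def resnet18_with_2d_features(num_classes=10):
    """Insert a 512->2 linear layer before a bias-free 2->num_classes fc
    layer, so class similarity can be read off directly as the included
    angle between 2-D features."""
    model = resnet18(num_classes=num_classes)
    model.fc = nn.Sequential(
        nn.Linear(512, 2),                        # 2-D feature extractor
        nn.Linear(2, num_classes, bias=False),    # bias-free classifier
    )
    return model
```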
A.2 LARGER λ CAUSES HIGHER INTER-CLASS SIMILARITY
In Figure 9, we can see that as the adversarial regularization hyperparameter λ increases, the inter-class similarity also increases. Moreover, because of the effect of ME in reducing the intra-class variance, we find that the intra-class variance decreases as λ increases.
A.3 BENCHMARK THE AA LEADERBOARD
A.3.1 ME-RST
Carmon et al. (2019) used an additional 500K pre-processed data to train robust models, which is the RST procedure. To benchmark robustness on the AA (Croce & Hein, 2020) leaderboard, we combine the maximum entropy regularizer with the RST method (ME-RST). To boost performance, we replaced the batch normalization (BN) layers with batch-instance normalization (BIN) layers (Nam & Kim, 2018) in the WideResNet-28-10 model; see A.3.2 for the reasons. We only change λ to 27 (the highest-robustness parameter in our ResNet-18 experiments) in the original RST settings¹, and the ME hyperparameter β is set to 1. Our results are tested at perturbation size ε = 8/255 under the ℓ∞ norm. We compare our method with the top-6 methods on the robustbench website² that also use the WideResNet-28-10 model and ε = 8/255 for testing robustness. Evaluation results are shown in Table 5.
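For reference, a simplified sketch of a BIN layer following Nam & Kim (2018): a learnable gate rho interpolates between batch-normalized and instance-normalized activations. The official implementation constrains rho to [0, 1] by clipping after each update; clamping in the forward pass is our simplification.

```python
import torch
import torch.nn as nn

class BatchInstanceNorm2d(nn.Module):
    """BIN: y = (rho * BN(x) + (1 - rho) * IN(x)) * gamma + beta."""
    def __init__(self, num_features):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.inorm = nn.InstanceNorm2d(num_features, affine=False)
        self.rho = nn.Parameter(torch.full((1, num_features, 1, 1), 0.5))
        self.gamma = nn.Parameter(torch.ones(1, num_features, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, num_features, 1, 1))

    def forward(self, x):
        rho = self.rho.clamp(0.0, 1.0)   # keep the gate in [0, 1]
        return (rho * self.bn(x) + (1 - rho) * self.inorm(x)) * self.gamma + self.beta
```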
A.3.2 ROBUST MODEL MAY NEED TO LEARN MORE SHAPE FEATURES
Inspired by our findings in this paper, we pose an open question: how do DNNs come to recognize two classes as similar in standard training? If we knew that, we could design a more robust model architecture with low inter-class similarity. However, this question is still a hard problem in deep learning, because it actually asks us to answer another question first: what kinds of features do DNN models learn?
We noticed that a previous work has studied this question. Geirhos et al. (2018) showed that DNNs can be biased towards texture features while losing shape features. This may explain why
the classes dog and cat are so hard to classify in our binary experiment: dog and cat have similar texture features (the hair). Therefore, robust models may need to pay more attention to shape features. To learn more shape features, we attempted to replace the BN layers with BIN layers, which were proposed by Nam & Kim (2018) to balance shape and texture features, and the experimental results in Table 5 indicate that BIN can effectively help improve both accuracy and robustness.
¹ RST's GitHub: https://github.com/yaircarmon/semisup-adv
² The AA leaderboard: https://robustbench.github.io/ | 1. What is the main contribution of the paper regarding the correlation between the drop in robust accuracy and inter-class similarity?
2. What are the strengths and weaknesses of the proposed approach, particularly in its connection to the fairness literature?
3. Do you have any concerns about the organization and technical aspects of the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any other questions or suggestions for improving the paper? | Summary Of The Paper
Review | Summary Of The Paper
Motivated by the empirical observations, the authors argue that there is a high correlation between the drop in robust accuracy, robust fairness, and inter-class similarity. The authors then propose to augment two existing AT methods with label smoothing and a maximum-entropy regularizer, respectively.
Review
The paper starts with an empirical observation that large inter-class similarity highly correlates with the drop in accuracy as well as the accuracy parity between different classes. This observation is quite natural, since if the features of two classes are close to each other, then it becomes harder for any predictor based on these features to classify them, leading to dropped accuracy for these classes. Motivated by this empirical observation, the authors proposed to use label smoothing (LS) to decrease the inter-class similarity, in order to alleviate the robustness-accuracy tradeoff problem.
Besides the label smoothing, the authors also argued that using the maximum entropy regularizer could help reduce the inter-class similarity, and then proposed two variants of existing works, i.e., adversarial training and TRADES, with the additional maximum entropy regularizer.
Overall the paper is quite clear and easy to follow. However, the current manuscript does not have a related-work section and indeed fails to discuss a line of work in the fairness literature. In particular, the problem studied in this work is closely related to the accuracy parity problem in the fairness literature, yet there is no discussion of this line of work. Some examples include [1-3] and the references therein.
[1]. Demonstrating accuracy equity and predictive parity performance of the compas risk scales in broward county.
[2]. Understanding and Mitigating Accuracy Disparity in Regression
[3]. Fair regression: Quantitative definitions and reduction based algorithms.
The organization of the paper, however, could be improved. For example, Section 3 jumps directly to the discussion of label smoothing without a proper introduction of what LS is. The same also applies to the maximum entropy regularization.
Technically, although the empirical observation is fine, I am interested in understanding whether this phenomenon generalizes to other datasets, or is specific to the one studied in this paper. To answer this question, it would be good to have some theoretical justification linking robust accuracy and inter-class similarity. This is quite important, as it would mean the empirical observation so far is not just a coincidence but holds in general.
I would suggest the authors soften the use of the word "proven" in the description of the related works in the first paragraph of the Introduction. Technically, none of the existing methods has been "proven" to be the best but verified empirically to be effective. To me, claiming these empirical results to be proven is quite misleading.
Other questions:
In Fig. 2, I would imagine the features from the penultimate layer to be a high-dimensional vector, so the corresponding measure should be a covariance matrix rather than a scalar variance. What's the specific measure used in Fig. 2(a)?
Minor:
In the caption of Fig. 1, it would be better to explain what PGD-10 error is, since this is not a standard term.
ICLR | Title
Understanding the robustness-accuracy tradeoff by rethinking robust fairness
Abstract
Although current adversarial training (AT) methods can effectively improve robustness against adversarial examples, they usually lead to a decrease in accuracy, known as the robustness-accuracy tradeoff. In addition, researchers have recently discovered a robust fairness phenomenon in AT models: not all categories of the dataset experience a serious decline in accuracy when AT methods are introduced. In this paper, we explore the relationship between the robustness-accuracy tradeoff and robust fairness for the first time. Empirically, we find that AT causes a substantial increase in inter-class similarity, which could be the root cause of these two phenomena. We argue that label smoothing (LS) is more than a trick in AT: the smoothness learned from LS can help reduce the excessive inter-class similarity caused by AT, and also reduce the intra-class variance, thereby significantly improving accuracy. We then explore the effect of another classic smoothing regularizer, namely maximum entropy (ME), and find that ME can also help reduce both inter-class similarity and intra-class variance. Additionally, we reveal that TRADES actually implies the function of ME, which can explain why TRADES usually performs better than PGD-AT on robustness. Finally, we propose the maximum entropy PGD-AT (ME-AT) and the maximum entropy TRADES (ME-TRADES), and experimental results show that our methods can significantly mitigate both the tradeoff and robust fairness.
1 INTRODUCTION
1.1 BACKGROUND
Deep neural networks (DNNs) have been proven to be vulnerable to adversarial attacks, as demonstrated in (Szegedy; Goodfellow et al.; Kurakin et al.; Carlini & Wagner). By adding crafted imperceptible perturbations to the input, attackers can easily fool a model into giving an incorrect prediction. To defend against adversarial attacks, tens of methods have been proposed, but most of them were later proven to be ineffective (Athalye et al., 2018). Among these many defense techniques, adversarial training (AT) (Madry et al., 2017) has been proven to be the most effective strategy against adversarial attacks.
Although current AT algorithms can effectively improve model robustness, there are two confusing phenomena in AT models. First, there is an inevitable robustness-accuracy tradeoff (Tsipras et al., 2018) in AT models, in which increasing robustness is always accompanied by an accuracy drop. Second, Xu et al. (2021) recently found that AT tends to introduce severe disparities in accuracy and robustness between different classes. For example, as shown in Figure 1b, in a PGD-AT model (Madry et al., 2017), both the accuracy and robustness of the 3rd class cat are much lower than those of the 1st class car, while the two classes have similar accuracies in the standard training model (see Figure 1a). This phenomenon is termed robust fairness by the authors.
Additionally, as Xu et al. (2021) mentioned, the robust fairness problem is closely related to the robustness-accuracy tradeoff, because the average accuracy drop in the tradeoff could mainly come from the classes that are hard to classify in AT. To verify this, we measured the accuracy drop for each class and calculated its percentage of the total accuracy drop, as shown in Figure 1c. We can see that it takes only two classes (cat and bird) to contribute almost half of the accuracy drop, while these two classes also have the lowest accuracy and robustness among all classes in AT. That is, these hard-to-classify classes have a significantly greater impact on the decline in accuracy, and to better understand the robustness-accuracy tradeoff, it should be determined why these classes are so difficult to classify in AT.
To explain the phenomenon, Xu et al. (2021) argued that some classes are difficult to classify in AT because they are intrinsically “harder” to classify than other classes, and AT tends to hurt both the accuracy and the robustness of these “hard” classes. To verify this point of view, these authors studied the effect of AT on a binary classification task under a Gaussian mixture distribution, where the “hard” class is the one with the larger variance. They showed that AT pushes the decision boundary closer to the larger-variance class and further worsens both the accuracy and the robustness of that class.
However, although they showed that the class with a larger variance is more difficult to classify, a question remains: is variance enough to describe how “hard” a class is? Imagine two Gaussian distributions that both have high variance but also an extremely large difference in mean values; they can still be classified well. Conversely, when the two Gaussian distributions both have low variance but extremely similar mean values, so that the two distributions severely overlap, we cannot classify them satisfactorily. That is, the inter-class similarity is also an important factor affecting the model's accuracy. With this point in mind, we measured both the inter-class similarity and the intra-class variance of standard training, PGD-AT, and TRADES models (Zhang et al., 2019) for each class on the CIFAR10 test set, as shown in Figure 2.
The measurement is performed in the penultimate layer feature space. For each class, we use the variance of features as the class’s intra-class variance. To measure the inter-class similarity of each
class, we first calculate the feature mean vectors of all classes, and then the cosine similarity between the mean vector of the measured class and those of the other classes. The largest cosine similarity is used as the inter-class similarity of the measured class in this paper. It is somewhat surprising to see that both the PGD-AT and TRADES models have a lower variance than the standard training model in Figure 2b, while they have worse accuracy instead. However, as shown in Figure 2a, both PGD-AT and TRADES lead to a higher inter-class similarity than standard training. In particular, we notice that the “hardest” class cat does not have the largest variance in the PGD-AT, TRADES, or standard training model, but it does have the largest inter-class similarity. These observations challenge Xu et al. (2021)’s theory that the “hard” classes are the large-variance classes and indicate that inter-class similarity does matter in AT, thus motivating us to study both the robust fairness phenomenon and the robustness-accuracy tradeoff in terms of the increased inter-class similarity.
1.2 OUR CONTRIBUTIONS
Understand the robustness-accuracy tradeoff & robust fairness. To the best of our knowledge, we are the first to study the relationship between the robustness-accuracy tradeoff and the robust fairness, and we find that the two phenomena could both come from the increased inter-class similarity caused by AT. More specifically, through our single AT and binary classification AT experiments in section 2, we find that:
• AT causes a general increase in inter-class similarity for each class, which can even lead to a feature overlap and finally to the accuracy drop in the tradeoff.
• The “hard” classes in AT are actually similar classes in standard training, and the increased inter-class similarity in AT makes them more similar and harder to classify, which causes the robust fairness problem.
Re-investigate the effect of smoothing regularizers in AT. Label smoothing (LS) (Szegedy et al., 2016) has been used as a trick to benchmark robustness in AT by Pang et al. (2020); however, we noticed that LS not only helps improve robustness but usually improves accuracy too, which means a reduction in the robustness-accuracy tradeoff. In this paper, we find that LS helps alleviate the tradeoff because it reduces the large inter-class similarity in AT and also provides a lower intra-class variance. Then, we investigate the effect of maximum entropy (ME) (Pereyra et al., 2017), which is also a classic smoothing regularizer, and we find that ME helps reduce both inter-class similarity and intra-class variance too. In addition, we find that the state-of-the-art AT method TRADES can be seen as a special form of maximum entropy learning, which could explain why the TRADES model has a lower intra-class variance than the PGD-AT model in Figure 2 and usually performs better than PGD-AT in terms of robustness. We propose the maximum entropy PGD-AT (ME-AT) and the maximum entropy TRADES (ME-TRADES), and experimental results show that our methods can significantly mitigate both the tradeoff and robust fairness.
2 RETHINKING ROBUST FAIRNESS TOWARD INTER-CLASS SIMILARITY
In Figure 2a, we have shown that AT models have higher inter-class similarity than the standard training model. In this section, we design two experiments to see how the high inter-class similarity in AT is related to both the robustness-accuracy tradeoff and robust fairness phenomena.
2.1 SINGLE ADVERSARIAL TRAINING
AT causes a feature overlap. We design the single adversarial training (single AT) experiment to see how AT affects one single class. In single AT, we conduct adversarial training on only one class while training the other classes normally. For better visualization, we adjust the penultimate layer of a ResNet-18 model to output 2-D features. In Figure 3, we show the single AT results of the two most representative classes: the “hardest” class cat and the “easiest” class car. The results of the other classes and the detailed settings are both provided in Appendix A.1. In Figure 3b, when we adversarially train only the 3rd class cat, the features of the cat severely overlap with those of the 5th class dog, and the overlapping features make the class cat almost impossible to classify (only 7.03% natural accuracy and 0% PGD-10 robustness). This observation intuitively shows how the inter-class similarity increases in AT, and suggests that the accuracy-drop part of the robustness-accuracy tradeoff could come from the increased inter-class similarity (the overlapping features).
The increase in inter-class similarity is general in AT. However, when single AT is carried out for the 1st class car, the features of the class car can still be separated well from the other classes, and both the accuracy and the PGD-10 robustness of the class car reach a high level (98.4% and 72.2%, respectively; see Figure 3a). Does this mean that the “easy” classes can avoid an increase in inter-class similarity in AT?
To check this, we measure the inter-class similarity in the single AT models; for comparison, we also measure the inter-class similarity of a standard-training ResNet-18 model with 2-D features. As shown in Figure 4, each class in the blue line represents the inter-class similarity of that class in the corresponding single AT model (e.g., the point of the class car in the blue line represents the inter-class similarity of the class car in the single car AT model), and the yellow line is the inter-class similarity of the standard training model. We can see that even the “easiest” class car in the single AT model has a higher inter-class similarity than in the standard training model. This observation shows that the increase in inter-class similarity is general in AT for all classes.
2.2 BINARY CLASSIFICATION ADVERSARIAL TRAINING
“Hard” classes or similar classes? Since the increase in inter-class similarity is general for all classes, we hypothesize that some classes are difficult to classify in AT because they are already similar in standard training, and the increased inter-class similarity caused by AT makes them more similar, turning them into the “hard” classes. To verify this assumption, we conduct binary classification AT experiments. We pair the class cat with each of the other classes in the CIFAR10 dataset for binary classification, and we use both PGD-AT (Madry et al., 2017) and TRADES (Zhang et al., 2019) to train our binary classification ResNet-18 models (512-D features here).
We plot the natural error and PGD-10 error of the PGD-AT and TRADES trained binary classification models in Figure 5a and Figure 5b, respectively. The classes on the horizontal axis are those binary-classified against the class cat, sorted from small to large by their similarity with the cat in standard training. We find that both the natural error and the PGD-10 error of the binary classification PGD-AT and TRADES models are highly positively correlated with the similarity in standard training. For example, the class car is the least similar class to the cat in standard training; when binary-classifying cat against car, the model achieves both low natural error and low PGD-10 error (4.6% and 11.0%). However, when binary-classifying cat against the most similar class dog, the ResNet-18 model even fails to converge in PGD-AT (49.7% for both natural and PGD-10 error), and although the model can converge in TRADES, it still has both the highest natural error and the highest PGD-10 error (23.4% and 44.0%). This observation indicates that the “hard” classes in AT could actually be the similar classes in standard training.
2.3 UNDERSTANDING THE TRADEOFF & ROBUST FAIRNESS
To briefly summarize, through our single AT and binary classification AT experiments, we find the following:
• AT can even cause a feature overlap for the “hard” classes, which leads to a severe accuracy drop.
• The increase in inter-class similarity is general in AT for all classes.
• The “hard” classes in AT may actually be the similar classes in standard training, made harder by the generally increased inter-class similarity.
These findings indicate that the increased inter-class similarity could be the root cause of both the robustness-accuracy tradeoff and the robust fairness problem, and they suggest a new way to mitigate the tradeoff: to obtain better robustness and accuracy, the excessive inter-class similarity in AT should be reduced (while not increasing the intra-class variance). In the next section, we show that this direction is promising through the effect of smoothing regularizers.
Our explanation. Finally, we provide an intuitive explanation for why AT leads to higher inter-class similarity. The core objective of AT is to force adversarial examples to follow the same distribution as the well-classified clean examples. To achieve this, TRADES directly minimizes the KL-divergence between adversarial and clean examples and minimizes the cross-entropy loss of clean examples to achieve high accuracy. In PGD-AT, this objective is implicit: PGD-AT directly minimizes the cross-entropy loss of adversarial examples, and because adversarial examples can be seen as a robust lower bound of clean examples, it actually forces both adversarial and clean examples to fit the same one-hot label. However, forcing adversarial examples to fit the distribution of clean examples may also pull clean examples closer to the distribution of adversarial examples. Since adversarial examples are naturally closer to other classes, adopting them could pull the classes closer together, resulting in higher inter-class similarity.
3 THE ROLE OF THE SMOOTHING REGULARIZER
Based on our findings in section 2, in this section, we argue that LS is not only a trick in AT. We find that the smoothness learned from LS can significantly reduce both inter-class similarity and intra-class variance in AT, which is a remedy for current AT methods. Then, we investigate the effect of ME, which is also a classic smoothing regularizer, and we find that TRADES can be seen as a special form of ME learning. Finally, we propose ME-AT and ME-TRADES to mitigate both the robustness-accuracy tradeoff and robust fairness.
3.1 LABEL SMOOTHING IS NOT ONLY A TRICK IN AT
LS has recently been used as a trick to benchmark robustness by Pang et al. (2020). However, we noticed that LS can usually help improve accuracy too, which means a reduction in the robustness-accuracy tradeoff. In this paper, we identify why LS helps mitigate the tradeoff. By visualizing the penultimate-layer feature representations of standard models with/without LS in a 2-D figure, Müller et al. (2019) showed that LS encourages the features of training examples from the same class to group in tighter clusters. That is, LS could help reduce the inter-class similarity and intra-class variance in standard training. To see whether LS has the same effect in AT, we measured the inter-class similarity and intra-class variance in the PGD-AT and TRADES models with/without LS. As shown in Figure 6, we find LS can indeed reduce both similarity and variance.
Therefore, we argue that LS is more than a trick in AT. Compared to standard training, LS plays a more significant role in AT by reducing the excessive inter-class similarity of robust models. As a result, in Table 1, we can see that LS improves accuracy more significantly in both the PGD-AT and TRADES models than in the standard training model. We also measured the robustness under AutoAttack (AA) (Croce & Hein, 2020), which is currently the most effective adversarial attack, and we can see that LS also increases AA accuracy in PGD-AT by 1.12.
However, we find that when adding LS into TRADES, the AA robustness becomes 0.2 lower than the original TRADES model, as shown in Table 1. This could happen because LS will cause a loss of information in the logits, and hence weaken the discriminative power of the trained models (Müller et al., 2019), which could hurt the performance of semi-supervised learning (knowledge distillation in Müller et al. (2019)’s case and TRADES here). This pitfall of LS motivates us to investigate the effect of another classic smoothing regularizer, namely, the ME.
3.2 UNDERSTANDING THE EFFECT OF MAXIMUM ENTROPY
3.2.1 PROPOSED METHOD
ME learning. Let us first take a brief look at ME learning. Let pθ(x) be the probability distribution over classes produced by a DNN model for input x. The entropy of this conditional distribution is given by:
H(p_\theta(x)) = -\sum_i p_\theta(x)_i \log p_\theta(x)_i
By adding the negative entropy term into the cross-entropy loss during training, the objective of ME learning is defined as:
L_{ME}(\theta) = CE(x, y) - \beta H(p_\theta(x)), \quad (\beta > 0)
where y represents the one-hot labels, and CE is the cross-entropy function:
CE(x, y) = -\sum_i y_i \log p_\theta(x)_i
TRADES is a special ME learning. Next, we show that the state-of-the-art AT method TRADES is also an ME learning method. Recall that the objective function of TRADES is:
L_{TRADES}(\theta) = CE(x, y) + \lambda \cdot KL(p_\theta(x) \,\|\, p_\theta(x'))
where x′ is the adversarial example, and KL is the KL-divergence, given by:
KL(p_\theta(x) \,\|\, p_\theta(x')) = \sum_i p_\theta(x)_i \log p_\theta(x)_i - \sum_i p_\theta(x)_i \log p_\theta(x')_i = -H(p_\theta(x)) - \sum_i p_\theta(x)_i \log p_\theta(x')_i
The TRADES objective function can then be rewritten as:
L_{TRADES}(\theta) = \underbrace{CE(x, y) - \lambda H(p_\theta(x))}_{\text{maximum entropy learning}} \underbrace{-\, \lambda \sum_i p_\theta(x)_i \log p_\theta(x')_i}_{\text{adversarial cross-entropy}}
We can see that the left part corresponds to ME learning, and the right part is a cross-entropy loss between the distributions of clean and adversarial examples. This finding reveals that TRADES is a special form of ME learning; as the most direct consequence, the TRADES model has larger entropy than the PGD-AT model on both clean and adversarial examples in Figure 7. We also notice that the entropy of the TRADES model is still far from the maximum entropy in a 10-class setting (ln 10 ≈ 2.3), which means that there should be enough room for the PGD-AT and TRADES models to receive a stronger ME regularization. Based on this fact, we propose the maximum entropy PGD-AT (ME-AT) and maximum entropy TRADES (ME-TRADES).
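This decomposition can be sanity-checked numerically in a few lines:

```python
import torch
import torch.nn.functional as F

# Check KL(p || p') = -H(p) - sum_i p_i * log p'_i on random distributions.
p = F.softmax(torch.randn(10), dim=0)    # clean-example distribution p(x)
q = F.softmax(torch.randn(10), dim=0)    # adversarial distribution p(x')
kl = (p * (p.log() - q.log())).sum()
neg_entropy = (p * p.log()).sum()        # equals -H(p)
cross = (p * q.log()).sum()              # sum_i p_i * log p'_i
assert torch.allclose(kl, neg_entropy - cross)
```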
ME-AT & ME-TRADES. Here, we formalize the objective functions of ME-AT and ME-TRADES. For ME-AT, we maximize the entropy of the adversarial example distribution, and we have the objective:
L_{ME\text{-}AT}(\theta) = CE(x', y) - \beta H(p_\theta(x')), \quad (\beta > 0)
For ME-TRADES, we augment the ME coefficient on the clean-example distribution in TRADES, and the objective is:
L_{ME\text{-}TRADES}(\theta) = CE(x, y) - (\lambda + \beta) H(p_\theta(x)) - \lambda \sum_i p_\theta(x)_i \log p_\theta(x')_i, \quad (\beta > 0)
For code simplicity, we realize ME-TRADES in a more concise way as:
L_{ME\text{-}TRADES}(\theta) = CE(x, y) + \lambda \cdot KL(p_\theta(x) \,\|\, p_\theta(x')) - \beta H(p_\theta(x))
which only requires adding a negative entropy term to the original TRADES code.
3.2.2 EXPERIMENT
Training setting. Our experiments are based on CIFAR10, which is the most popular dataset in AT. We perform the standard CIFAR10 data augmentation: a random 4-pixel crop followed by a random horizontal flip. We train ResNet-18 for 100 epochs using SGD with 0.9 momentum, and the batch size is 64. The initial learning rate is 0.1 and is reduced to 0.01 and 0.001 at epochs 75 and 90, respectively. The weight decay is 2 × 10^{-4}. We use the 10-step PGD adversary in training, with the perturbation size ε = 0.03125 under the ℓ∞ norm and the step size fixed to 0.008.
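These settings translate directly into the following PyTorch sketch; `model`, `train_loader`, and `train_one_epoch` are placeholders for the usual training components:

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=2e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[75, 90], gamma=0.1)  # lr: 0.1 -> 0.01 -> 0.001

for epoch in range(100):
    train_one_epoch(model, train_loader, optimizer)  # hypothetical routine
    scheduler.step()
```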
Test setting. To evaluate robustness, we use PGD-20, C&W-20 (Carlini & Wagner, 2017) and AA (Croce & Hein, 2020) to generate adversarial examples, also at ε = 0.03125 under the ℓ∞ norm. We report the test accuracy/robustness of the best checkpoint, i.e., the one achieving the highest robustness under PGD-20 on the test set.
Reduce inter-class similarity & intra-class variance. We first measure the inter-class similarity and intra-class variance of our ME-AT and ME-TRADES models. As shown in Figure 6, ME helps reduce both similarity and variance more effectively than LS. Since TRADES is also an ME learning method, this effect of ME can explain why the TRADES model has lower variance than the PGD-AT model in Figure 2b. Note that the inter-class similarity of the TRADES model is a little higher than that of the PGD-AT model in Figure 2a because the adversarial regularization hyperparameter λ = 6 corresponds to a strong adversarial regularization that makes the similarity higher. To clearly see the effect of λ on inter-class similarity and intra-class variance, we measured both at λ = 1, 6, 12 in TRADES; when λ = 1, TRADES has both lower similarity and lower variance than PGD-AT (see Figure 9 in the Appendix).
Enable larger hyperparameter λ. Next, we evaluate the performance of TRADES and our ME-TRADES, as shown in Table 2. We find that ME-TRADES can adopt a higher adversarial regularization hyperparameter λ than the original TRADES: at λ = 6, the robustness of TRADES reaches its maximum (49.95 AA accuracy) and then decreases as λ increases, whereas ME-TRADES reaches its maximum robustness at λ = 27, which is much larger than the maximum-robustness parameter λ = 6 of the original TRADES, and the maximum robust accuracy is also higher (50.61 AA accuracy). This could be caused by the effect of ME in reducing the inter-class similarity and intra-class variance, which could improve the effective capacity of the DNN model, allow the model to receive a stronger adversarial regularization, and thus help achieve higher robustness. This effect of ME may therefore explain why TRADES usually performs better than PGD-AT on robustness.
Mitigate the robustness-accuracy tradeoff. While large model capacity has been shown to be crucial for reducing the robustness-accuracy tradeoff (Madry et al., 2017), we find that this possible improvement in effective capacity from ME can also help mitigate the tradeoff. We show the ME-AT results in Table 3. When the ME hyperparameter β = 0.5, ME-AT performs better in terms of both robustness and accuracy than PGD-AT; therefore, ME can effectively mitigate the tradeoff.
Mitigate the robust fairness. To check whether ME-AT and ME-TRADES can mitigate the robust fairness problem, we follow Xu et al. (2021)’s setting, which uses the worst-class performance to measure fairness: a higher worst-class accuracy/robustness means better fairness. However, we do not compare against the fair robust learning algorithm (FRL) proposed by Xu et al. (2021). FRL is designed to solve the fairness problem and can significantly increase the worst-class performance, but it also causes worse average performance than the previous SOTA AT method TRADES. In contrast, our ME-AT and ME-TRADES are, to our knowledge, the first methods proposed to mitigate the robustness-accuracy tradeoff and robust fairness together, i.e., to increase the worst-class performance and the average performance at the same time, which is a more difficult task than only improving the worst-class performance. Therefore, it is not appropriate to compare FRL with our method. As shown in Table 4, both ME and LS help increase the worst-class accuracy and robustness when added to PGD-AT. Because TRADES already contains ME, TRADES performs better than PGD-AT with respect to fairness. We show the results of ME-TRADES for β = 1 and λ = 15, 27, which correspond to the largest average accuracy and robustness parameters, respectively. ME-TRADES further improves the worst-class accuracy (λ = 15) and robustness (λ = 27) compared to TRADES.
Benchmark the AA leaderboard. To benchmark the robustness on the AA leaderboard, we combine our method with robust self-training (RST) (Carmon et al., 2019), which adds 500K pre-processed data into training, and we obtain 60.37 AA robustness at ε = 8/255 on CIFAR10. Training details are provided in Appendix A.3.1.
4 CONCLUSION
In this paper, we corroborate that AT causes an increase in inter-class similarity, which could be the root of both the robustness-accuracy tradeoff and the robust fairness phenomena. We confirm that ME can help reduce the excessive inter-class similarity in robust models and also provides a lower intra-class variance, remedying previous AT methods. Our work could provide new insight into understanding and mitigating both the robustness-accuracy tradeoff and robust fairness.
A APPENDIX
A.1 TRAINING DETAILS AND RESULTS OF SINGLE AT EXPERIMENTS
Training details of single AT. Our single AT experiments are performed on the ResNet-18 model. A linear layer (512×2) is added before the fc layer of the ResNet-18 model to output 2-D features, and the weight size of the fc layer is adjusted to (2×10) to output the logits. We set bias = None for the fc layer, which means we can estimate the similarity of two classes solely from their included angle in the 2-D feature representation figure. For the single adversarially trained class, we perform PGD-AT (Madry et al., 2017) at perturbation size ε = 0.03125 under the ℓ∞ norm, and for the other classes we perform standard training. The single AT results of the 1st class car and the 3rd class cat have been shown in Figure 3; the results of the remaining classes are provided in Figure 8 below.
A.2 LARGER λ CAUSES HIGHER INTER-CLASS SIMILARITY
In Figure 9, we can see that as the adversarial regularization hyperparameter λ increases, the inter-class similarity also increases. Moreover, because of the effect of ME in reducing the intra-class variance, we find that the intra-class variance decreases as λ increases.
A.3 BENCHMARK THE AA LEADERBOARD
A.3.1 ME-RST
Carmon et al. (2019) used an additional 500K pre-processed data to train robust models, which is the RST procedure. To benchmark robustness on the AA (Croce & Hein, 2020) leaderboard, we combine the maximum entropy regularizer with the RST method (ME-RST). To boost performance, we replaced the batch normalization (BN) layers with batch-instance normalization (BIN) layers (Nam & Kim, 2018) in the WideResNet-28-10 model; see A.3.2 for the reasons. We only change λ to 27 (the highest-robustness parameter in our ResNet-18 experiments) in the original RST settings¹, and the ME hyperparameter β is set to 1. Our results are tested at perturbation size ε = 8/255 under the ℓ∞ norm. We compare our method with the top-6 methods on the robustbench website² that also use the WideResNet-28-10 model and ε = 8/255 for testing robustness. Evaluation results are shown in Table 5.
A.3.2 ROBUST MODEL MAY NEED TO LEARN MORE SHAPE FEATURES
Inspired by our findings in this paper, we pose an open question: how do DNNs come to recognize two classes as similar in standard training? If we knew that, we could design a more robust model architecture with low inter-class similarity. However, this question is still a hard problem in deep learning, because it actually asks us to answer another question first: what kinds of features do DNN models learn?
We noticed that a previous work has studied this question. Geirhos et al. (2018) showed that DNNs can be biased towards texture features while losing shape features. This may explain why
the classes dog and cat are so hard to classify in our binary experiment: dog and cat have similar texture features (the hair). Therefore, robust models may need to pay more attention to shape features. To learn more shape features, we attempted to replace the BN layers with BIN layers, which were proposed by Nam & Kim (2018) to balance shape and texture features, and the experimental results in Table 5 indicate that BIN can effectively help improve both accuracy and robustness.
¹ RST's GitHub: https://github.com/yaircarmon/semisup-adv
² The AA leaderboard: https://robustbench.github.io/ | 1. What is the focus of the paper regarding inter-class similarity and intra-class variance?
2. What are the strengths of the paper, particularly in its experimental analysis?
3. What are the weaknesses of the paper, such as marginal improvements and missing references?
4. Do you have any questions about the paper's conclusions or proposed methods?
5. How does the reviewer assess the clarity and quality of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper investigates inter-class similarity and intra-class variance, and corroborates that AT will cause an increase in inter-class similarity, which could be the root of both the robustness-accuracy tradeoff and robust fairness phenomena. The authors first consider Label Smoothing (LS) as the regularizer, and conclude that LS will cause a loss of information in the logits and hence weaken the discriminative power of the trained models. The authors then confirm that ME can help reduce the excessive inter-class similarity in robust models, and also provides a lower intra-class variance, which remedies previous AT methods. Experiments partially support the conclusions of the paper.
Review
Strength:
The paper provides extensive experimental analysis (e.g., single adversarial training) for the potential reason of trade-off between robustness and accuracy and robust fairness. The paper concludes that high inter-class similarity might be the main cause.
The paper investigates the insight behind why TRADES performs better than PGD-AT.
The paper proposes new ME-based methods that further improve adversarial robustness over the baselines.
Weakness:
The improvement by ME is marginal compared with non-ME methods. Given that AA might not be a proxy for the strongest adversarial attack, the true improvement might be smaller or even negative.
Regarding writing, in Section 3.1 the method of LS is not introduced. For those who are not familiar with LS, it is hard to understand the section.
The paper misses certain references which study the trade-off between robustness and accuracy, e.g., [1]. [1] A Closer Look at Accuracy vs. Robustness, NeurIPS 2020 |
ICLR | Title
FGNAS: FPGA-Aware Graph Neural Architecture Search
Abstract
The success of graph neural networks (GNNs) in the past years has aroused growing interest and effort in designing the best models to handle graph-structured data. As the neural architecture search (NAS) technique has been shown to rival human experts in discovering efficient network topologies, it has recently been applied to the field of graph network engineering. However, existing work on graph NAS is purely software (SW) design and does not consider hardware (HW) constraints at all, which often leads to sub-optimal system performance. To address this problem, we propose the first SW-HW co-design framework for automating the search and deployment of GNNs. Using FPGA as the target platform, our framework is able to perform FPGA-aware graph neural architecture search (FGNAS). To evaluate our design, we experiment on benchmark datasets, namely Cora, CiteSeer, and PubMed, and the results show that FGNAS has better capability in optimizing the accuracy of GNNs when their hardware implementation is specifically constrained.
1 INTRODUCTION
Graph neural networks (GNNs) are the state of the art in solving machine learning problems represented in graph form, including social networking (Tan et al., 2019; Nurek & Michalski, 2019), molecular interaction (Huang et al., 2020; Spalević et al., 2020), and problems in Electronic Design Automation (EDA) (Ma et al., 2020; Ma et al., 2019), etc. As a result, GNNs have attracted a great deal of research interest in the deep learning community on both the software (SW) (Wu et al., 2019; Li et al., 2015) and hardware (HW) (Wang et al., 2020; Zeng & Prasanna, 2020) sides.
Similar to many other neural networks, the performance of a GNN depends significantly on its neural architecture, and hence considerable effort has been put into tuning its computational components (Hamilton et al., 2017). Among the existing algorithms, message passing has set the groundwork for spatial convolutional graph neural networks, from which most recent breakthroughs are derived (Gilmer et al., 2017). As the algorithmic variation increases, identifying better sub-structures of GNNs becomes substantially more challenging because the design space grows exponentially. On the other hand, the demand for improved feature-extracting ability remains high.
Soon after being proposed by Zoph & Le (2016), neural architecture search became a mainstream research topic in machine learning. It has been demonstrated that NAS is promising to surpass human experts and meanwhile liberate them from laborious effort (Chen et al., 2018). Although the original reinforcement-learning-based NAS suffers from a timing-inefficiency problem that subsequent works strove to solve (Yan et al., 2019; Liu et al., 2019), it is well established and has thus been adapted for searching novel GNNs.
Recently, Gao et al. (2019) designed the first graph NAS framework. Based on state-of-the-art GNN methodology, GraphNAS formulates a layered design space well suited to the controller. A parameter-sharing strategy is also adopted. Coincidentally, Zhou et al. (2019) also used reinforcement learning to automate graph neural network design on a similar search space but with split controllers. The search process is guided in an incremental manner such that sampling efficiency is boosted. Both of these works have improved the accuracy of GNNs over existing hand-crafted networks, indicating that NAS is the future solution for graph-based learning.
However, these works focus only on the neural architecture, while the hardware implementation of GNNs (Geng et al., 2019) is equally important to the final performance. Hardware-aware NAS has been widely discussed for CNNs (Zhang et al., 2020; Wang et al., 2018), but, to the best of our knowledge, joint search of hardware and GNN architectures has not been publicly reported. In this paper, we use graph NAS with a hardware design objective and propose a software-hardware co-design framework. We employ FPGA as the vehicle for illustration and implementation of our methods. Hardware constraints are taken into account, so quantization is adopted to compress the model. Under specific hardware constraints, we show our framework can successfully identify solutions of higher accuracy in a shorter time than random search and traditional two-step tuning.
2 PROBLEM FORMULATION
The problem of jointly searching graph neural network architectures and hardware designs can be formulated as follows. Given an architecture space A, each sample a ∈ A characterizes a hardware space H(a). The objective is then to find the optimal architecture and hardware design pair 〈a∗, h∗〉 such that a∗ ∈ A and h∗ ∈ H(a∗). With the target dataset Dt for training and Dv for validation, the accuracy of a design can be measured as acct(a, h) and accv(a, h), respectively, while the hardware performance hp(a, h) is independent of the data. As the neural architecture sample is parameterized by the weights w, we define the optimality of the design as
$$a^* = \arg\max_{a \in A}\; \mathrm{acc}_v(a(w^*), h^*) \quad \text{s.t.} \quad w^* = \arg\max_{w}\; \mathrm{acc}_t(a(w), h^*) \tag{1}$$
and at the same time
$$h^* = \arg\max_{h \in H(a^*)}\; \mathrm{hp}(a^*, h) \quad \text{s.t.} \quad \mathrm{hp}(a^*, h^*) \geq \mathrm{spec} \tag{2}$$
where spec is the hardware specification required to be satisfied by the design.
However, the above formulation poses a challenge for implementation. When the hardware specification relates to multiple objectives, e.g., area and latency, the hardware performance is not a scalar, and hence the optimization is ambiguous. In practice, the design is acceptable as long as the hardware constraints are met; to optimize the hardware design, one can set stricter and stricter constraints on the aspect of interest. Therefore, we relax the optimization of hardware performance to hardware eligibility, and reformulate the problem as
$$a^* = \arg\max_{a \in A}\; \mathrm{acc}_v(a(w^*), h) \quad \text{s.t.} \quad w^* = \arg\max_{w}\; \mathrm{acc}_t(a(w), h) \tag{3}$$
and
$$\exists\, h \in H(a^*) \;\; \text{s.t.} \;\; \mathrm{hp}(a^*, h) \geq \mathrm{spec}. \tag{4}$$
It is worth mentioning that when the hardware constraint has two or more dimensions, the ≥ symbol applies to every dimension.
In this work, we rely on a recurrent neural network to jointly optimize both the GNN architecture and its hardware design. As such, the reinforcement learning NAS framework is restructured to co-explore the software and hardware spaces. Based on the above formulation, our framework aims to discover the best neural architectures that are guaranteed to be implementable.
3 FGNAS
In this section, we delve into the details of our FPGA-aware graph neural architecture search (FGNAS) framework. As shown in Figure 1, three main components comprise FGNAS,
namely the controller, the FPGA model builder, and the GNN model trainer. For each layer of the child network, our controller generates parameters of three types, defining the network topology, the hardware realization, and the precision. For each sample from the controller, a hardware model is first constructed and evaluated against the predefined constraints. Since most samples may not be implementable, their training is circumvented and their reward is set to 0; otherwise, the network is built, trained, and validated. Finally, once a mini-batch of samples has been evaluated, the parameters of the controller are updated once. The process terminates after a certain number of episodes.
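The control flow can be condensed as in the Python sketch below. This is our own reading of Figure 1, not the authors' code: `controller`, `build_fpga_model`, and `train_gnn` are hypothetical placeholders for the three components, where `controller.sample()` yields a parameter set with its log-probability, `build_fpga_model` returns a tuple of hardware estimates, and `train_gnn` returns validation accuracy.

```python
def fgnas_search(controller, build_fpga_model, train_gnn, spec,
                 episodes=2000, batch=5):
    """High-level FGNAS control flow (our sketch of Figure 1)."""
    rewards, log_probs = [], []
    for _ in range(episodes):
        params, log_prob = controller.sample()     # topology + hardware + precision
        hw = build_fpga_model(params)              # analytical FPGA estimates
        if any(p < s for p, s in zip(hw, spec)):
            r = 0.0                                # infeasible: training skipped
        else:
            r = train_gnn(params)                  # build, train, and validate
        rewards.append(r)
        log_probs.append(log_prob)
        if len(rewards) == batch:                  # one update per mini-batch
            controller.update(log_probs, rewards)
            rewards, log_probs = [], []
```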
3.1 SEARCH SPACE
We divide the search space into two sub-spaces: the architectural space and the hardware space. The search spaces are identical for each layer of a GNN, so the same types of parameters are sampled per layer. For convenience of illustration, we describe the parameters of a single layer as follows.
3.1.1 ARCHITECTURAL SPACE
The architectural space contains the parameters that define the operational mechanism of the graph network. At the time of writing, GNN topologies share a message-passing computational flow characterized by graph-wise convolution and vary only in the way embedded features are generated and combined. Consequently, we define the architectural space in terms of tunable sub-structures.
Basically, three separate stages are cascaded in each layer: (1) the embeddings from the last layer are linearly transformed; (2) messages between each connected pair of nodes are weighted; and (3) new features of neighbouring nodes are aggregated to produce the new embedding. Following these three operations, five parameters are included in the architectural space.
• Embedding Dimension. The embedding represents the features of the nodes extracted by the hidden layers. A linear operation is applied to convert the previous embedding into another space of d dimensions.
• Attention Type. The attention type refers to how the messages between connected nodes are weighted. For the new temporary embedding $H^k_{i,j}$, a coefficient is first computed for weighting it during the aggregation phase.
• Aggregation Type. For all the incoming messages, there are different ways of mixing them to produce the new features. The common methods are taking the sum, mean, and maximum.
• Number of Heads. We apply multi-headed attention to the GNN architecture as it is commonly used to stabilize performance. Heads of the same message are concatenated in every layer except the last, where they are averaged to match the output dimension.
• Activation Function. The activation function can add nonlinearity to the embedding. Considering the hardware constraints, we include four options for nonlinearity: “relu”, “tanh”, “sigmoid”, and “elu”.
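To make these five choices concrete, here is a minimal PyTorch sketch of one searchable layer over a dense adjacency matrix. It is our own illustration, not the paper's implementation: the class and argument names (`SearchableGNNLayer`, `attn_type`, `aggr_type`, `act`) are hypothetical, a single head is shown for clarity, and the adjacency is assumed to include self-loops so every row has at least one neighbour.

```python
import torch
import torch.nn as nn

class SearchableGNNLayer(nn.Module):
    """One GNN layer assembled from sampled architectural parameters (sketch)."""
    def __init__(self, in_dim, embed_dim, attn_type="const", aggr_type="sum", act="relu"):
        super().__init__()
        self.lin = nn.Linear(in_dim, embed_dim)      # stage 1: linear transform
        self.attn_type = attn_type
        self.aggr_type = aggr_type
        self.act = {"relu": torch.relu, "tanh": torch.tanh,
                    "sigmoid": torch.sigmoid, "elu": nn.functional.elu}[act]
        if attn_type == "gat":                       # learnable attention coefficients
            self.a = nn.Linear(2 * embed_dim, 1)

    def forward(self, x, adj):
        # x: [N, in_dim] node features; adj: [N, N] dense 0/1 adjacency with self-loops
        h = self.lin(x)                              # stage 1
        if self.attn_type == "gat":                  # stage 2: weight the messages
            n = h.size(0)
            pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                              h.unsqueeze(0).expand(n, n, -1)], dim=-1)
            e = self.a(pair).squeeze(-1).masked_fill(adj == 0, float("-inf"))
            w = torch.softmax(e, dim=-1)
        else:                                        # constant (normalized) attention
            w = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1)
        msg = w.unsqueeze(-1) * h.unsqueeze(0)       # [N, N, d] weighted messages
        if self.aggr_type == "sum":                  # stage 3: aggregation
            out = msg.sum(dim=1)
        elif self.aggr_type == "mean":               # mean over all N slots, for brevity
            out = msg.mean(dim=1)
        else:
            out = msg.max(dim=1).values
        return self.act(out)                         # stage 4: nonlinearity
```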
3.1.2 HARDWARE SPACE
The computations of GNN inference are all parallelizable over the features of the same embedding. As a large dimension would require exponentially complex computation, it is necessary to divide the vector-wise operations into sub-tasks. Therefore, we choose the group size of the features as the key parameter for scaling the hardware.
Almost all the main tasks can be divided, and we summarize them into four cases:
1. For the embedding to transform from Ti to To features, two parameters ti and to are used for grouping them separately.
2. The attention coefficients possibly also require a linear operation, but the output is a scalar, so we only divide the input by a size of tattn.
3. The aggregation is similar to the above case in that there is only one output. We also assign one parameter taggr for it.
4. Lastly, the nonlinearity requires a one-to-one operation on the feature vector. As this is probably the most challenging operation for hardware, we also group the features into a size of tact.
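As a rough illustration of how these tiling factors scale the hardware, the sketch below estimates a cycle count for the linear-transform stage. The formula and names are our own simplification, not the paper's FPGA model; it assumes one (ti × to) tile of the Ti × To weight matrix is processed per cycle and nodes are processed sequentially.

```python
import math

def linear_stage_cycles(T_i, T_o, t_i, t_o, num_nodes):
    """Back-of-the-envelope cycle estimate for the linear-transform stage
    of one layer under a t_i x t_o tiling (our assumption)."""
    tiles = math.ceil(T_i / t_i) * math.ceil(T_o / t_o)
    return tiles * num_nodes

# Example: 1433 -> 64 features (Cora input layer), 16x8 tiles, 2708 nodes.
print(linear_stage_cycles(1433, 64, 16, 8, 2708))  # rough cycle count
```

Larger tiling factors reduce cycles at the cost of more parallel hardware, which is exactly the trade-off the hardware space exposes to the controller.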
In addition to the architectural and hardware spaces, we also consider mixed-precision design, which plays an important role in both software and hardware performance. In this case, the quantization space also needs to be explored; details are discussed in Section 3.4.
3.2 ALGORITHM
Reinforcement learning is applied in our design as the search backbone. As we have parameterized the design of both the architecture and the hardware and arranged these parameters by layer, one RNN can be employed to sample the parameters sequentially as actions from the respective lists of options. For each sampled design, the hardware performance is analyzed using our FPGA model under the resource and latency constraints. Only if the sampled hardware design satisfies the hardware specifications will the software design be trained and tested on the dataset. The reward for the sample 〈a, h〉 is then
$$R(a, h) = \begin{cases} 0, & \mathrm{hp}(a, h) < \mathrm{spec} \\ \mathrm{acc}_v(a, h), & \text{otherwise} \end{cases} \tag{5}$$
In this way, training is circumvented wherever possible, and the search can be faster than pure NAS.
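Eq. (5) translates directly into a few lines of Python. As before, `fpga_model` and `train_and_eval` are hypothetical placeholders for the analytical FPGA model and the GNN trainer, and `spec` and the model output are tuples compared element-wise per the multi-dimensional convention above.

```python
def reward(sample, spec, fpga_model, train_and_eval):
    """Constrained reward of Eq. (5): infeasible samples score 0 and skip training."""
    hw_perf = fpga_model(sample)          # e.g., latency / LUT / FF / DSP estimates
    if any(p < s for p, s in zip(hw_perf, spec)):
        return 0.0                        # hp(a, h) < spec: no GPU time spent
    return train_and_eval(sample)         # validation accuracy as the reward
```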
Once the reward is obtained, the parameter θ of the controller is updated following the policy gradient rule (Williams, 1992):
$$\nabla J(\theta) = \frac{1}{m} \sum_{k=1}^{m} \sum_{t=1}^{T} \gamma^{T-t}\, \nabla_\theta \log \pi_\theta(a_t \mid a_{(t-1):1})\,(R_k - b) \tag{6}$$
where J(θ) is the expected reward at the initial step.
The controller is configured as follows. The number of steps T equals the total number of parameters to be sampled; the batch size for updating θ is m = 5 episodes; the reward is not discounted, so γ = 1; and the baseline b is the exponential moving average of the reward with a decay factor of 0.9.
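A PyTorch sketch of one controller update consistent with Eq. (6) and this configuration (γ = 1, m = 5, EMA baseline with decay 0.9) is shown below. It is an illustration under our assumptions: each entry of `log_probs` is the summed log-probability of the T sampled actions in one episode (a tensor with gradients), and the EMA baseline is updated per episode for simplicity.

```python
import torch

def update_controller(log_probs, rewards, baseline, optimizer, decay=0.9):
    """One REINFORCE step per Eq. (6) with gamma = 1 and an EMA baseline (sketch)."""
    loss = 0.0
    for lp, r in zip(log_probs, rewards):
        baseline = decay * baseline + (1 - decay) * r   # EMA baseline update
        loss = loss - lp * (r - baseline)               # REINFORCE objective
    loss = loss / len(rewards)                          # average over the batch (m = 5)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return baseline
```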
3.3 FPGA MODELING
We adopt a generic FPGA design model that is widely used for CNN accelerators (Zhang et al., 2015). Figure 2 illustrates the block diagram of the hardware segment for one layer. For each layer, four stages are pipelined, consisting of the linear transform, attention coefficient computation, aggregation, and nonlinear operation. The messages between consecutive stages are registered. Two buffers are employed to resolve the read/write conflict by alternately accessing the main memory and serving the computational units. As mentioned above, this model is fully scalable in the dimension of the embedded features based on the parameters defined.
3.4 MIXED PRECISION
We also consider the mixed-precision scenario in our design, where data are quantized using different bit widths. Like the other parameters, quantization parameters are also arranged by layer, so data in the same layer share the same format. As quantization methods are plentiful and have a significant impact on model accuracy, we avoid varying them and simply adopt post-training quantization (PTQ) with linear quantization as follows.
Given the quantization interval $\Delta$ and a range bounded by $B_{\min}$ and $B_{\max}$, the quantization of a real number x is
$$\hat{x} = \mathrm{clip}(\lfloor x/\Delta \rceil \times \Delta,\; B_{\min},\; B_{\max}), \tag{7}$$
where $\lfloor \cdot \rceil$ denotes rounding to the nearest integer. For the signed fixed-point format, $\Delta$, $B_{\min}$, and $B_{\max}$ are determined by the number of bits allocated to the integral ($b_i$) and fractional ($b_f$) parts as
$$\Delta = 2^{-b_f}, \quad B_{\min} = -2^{b_i}, \quad B_{\max} = 2^{b_i} - \Delta. \tag{8}$$
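Eqs. (7)-(8) correspond directly to a few lines of NumPy. This is a generic sketch of signed fixed-point PTQ under the definitions above, not the authors' code.

```python
import numpy as np

def quantize(x, b_i, b_f):
    """Signed fixed-point post-training quantization per Eqs. (7)-(8):
    b_i integer bits, b_f fractional bits, round-to-nearest."""
    delta = 2.0 ** (-b_f)
    b_min, b_max = -2.0 ** b_i, 2.0 ** b_i - delta
    return np.clip(np.round(x / delta) * delta, b_min, b_max)

w = np.array([-3.7, 0.126, 1.05])
print(quantize(w, b_i=2, b_f=4))  # -> [-3.6875  0.125   1.0625]
```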
Consequently, in the mixed-precision design, four parameters are added to the search space, namely $w_i$ and $w_f$ for the weights and $a_i$ and $a_f$ for the activations. With mixed precision, the hardware space increases exponentially, and the components in our FPGA model must be configured per bit width. We rely on the Xilinx HLS tool to synthesize all configurations and profile the size and latency information. The synthesis results of sample operational units are shown in the supplementary material. It is noted that the impact of quantization on hardware varies significantly among operators.
4 EXPERIMENT
In this section we test the performance of FGNAS on held-out graph datasets for node classification. To study search efficiency, both the test accuracy and the search time are evaluated and compared. The experiments are carried out using a single Nvidia 1080Ti graphics processing unit (GPU) and an Intel 8700K CPU. No dedicated FPGA chips are used, but we use Xilinx devices for reference. We assume a clock rate of 100 MHz throughout all the experiments. Note that since we constrain the hardware, comparing accuracy with state-of-the-art networks is not very sensible; instead, we evaluate search efficiency against baseline methods.
4.1 DATASET
Three datasets are used for benchmarking performance on transductive learning, namely Cora, CiteSeer, and PubMed. The statistics and training configurations are listed in Table 2. The training setting on these datasets follows that of Zhou et al. (2019). Since the volumes and complexity of the datasets vary widely, the hardware constraints of the search are set accordingly.
4.2 BASELINE METHOD
To evaluate the ability and efficiency of FGNAS, two methods are considered as baselines and run in parallel with our method.
Random Search. We use a random search approach as the baseline for search efficiency. The random search results can reflect the distribution of candidate solutions in a specific design space. For certain data and hardware constraints, random search can already render decent results.
Separate Search. The traditional two-phase design philosophy cannot fully explore the design space joined by the hardware and architectural subspaces. In this philosophy, a fixed network architecture is first selected (by hand or by automation), and afterwards a hardware design is customized for this specific architecture. It therefore explores only a fraction of the full design space containing every architecture-hardware pair.
To show the advantage of our co-design method over the separate design, we follow the above pipeline and partially use our framework to perform a pure architecture search followed by a pure hardware search based on the best network found.
4.3 SEARCHING DETAILS
The actual search space used throughout the experiments is shown in Table 1. During the search, the controller is updated with the ordinary SGD algorithm and a constant learning rate of 0.1. When a child network is sampled and its hardware verified, it is trained using the Adam optimizer for 200 epochs. Validation is performed after every epoch, and the highest validation accuracy is taken as the reward to the controller. As a rule of thumb, we set the depth of the child networks to two layers.
The search stops after sampling 2000 episodes. With hardware constraints, however, most samples in both joint and random search may not be valid, so their training can be saved. Consequently, for a fair comparison, we use the total number of trained samples to guide the random search such that the GPU hours are on the same scale. In the case of separate search, the GPU time is completely defined by the episode quantity, and we set 200 episodes for the architecture search and 800 for the hardware search. Each experiment includes 5 runs, and the one with the highest test accuracy is taken for evaluation. For the selected run, we report the accuracy of the best sample as well as the average of the top-10 samples.
4.4 PERFORMANCE
We test the search efficiency of our method across varying hardware constraints on latency, number of LUTs/FFs, and number of DSPs. The results on Cora are shown in Table 3. In general, the joint search achieves the best accuracy and the shortest search time, though with some variance.
4.4.1 COMPARING WITH RANDOM SEARCH
Random search is already quite performant in the sense that the highest accuracy is discoverable under certain hardware constraints. For example, with a 1 ms latency, 100,000 LUTs/FFs, and 100 DSPs, it achieves the best accuracy among the three methods. However, when the constraints are narrower, the distribution of decent samples is far sparser. As a result, the best accuracy covered by searching a fixed number of samples is lower than with the other two methods.
The search time of the random method is around 1x to 2x that of the joint search. There are two explanations for this. Firstly, the sampled networks are more scattered, so their average size is larger; although the GPU calls are equal, the training time of randomly sampled networks is higher. Secondly, in order to reach the same number of implementable samples as the joint search, many more episodes need to be inspected, so the CPU time adds up to a considerable level.
4.4.2 COMPARING WITH SEPARATE SEARCH
The separate search consumes the most time in our setting because 1) more samples are actually trained due to the manual setup, and 2) the architecture found in the first step is larger than the average size. Its accuracy is observed to be slightly better than random search and in some cases to surpass the joint search. However, since the pure architecture search is not aware of the hardware constraints at all, the post-quantization accuracy may degrade severely, as a decent bit-width allocation hardly exists.
4.4.3 IN-DEPTH OBSERVATION
The experimental results show that our method explores the design space more efficiently than the baselines. It achieves the best accuracy in most hardware cases while running 1x-3x faster. The advantage owes to the fact that the SW/HW co-search explores the design space in a local region approaching the constrained area. Figure 3 plots the actual hardware statistics of the searched samples projected onto three usage-latency planes. It shows that the Pareto frontier of the joint search method is closest to the valid area constrained by the hardware among all the methods.
5 CONCLUSION
Neural architecture search is a promising solution for the advancement of graph neural network engineering, but it lacks hardware awareness. In this work, we propose an FPGA-based SW/HW co-design framework, named FGNAS, that jointly explores the architectural and hardware spaces. Using reinforcement learning, a generic hardware model, and mixed-precision design, FGNAS performs evidently more efficiently than random search and the traditional separate method. Under different hardware constraints, FGNAS achieves the best accuracy in the majority of cases with 1x-3x faster running time. Besides, the cause of this advantage is discussed via statistical analysis. | 1. What is the main contribution of the paper, and how does it address the problem of NAS for GNN architectures under hardware constraints?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and impact compared to prior works?
3. How does the reviewer assess the quality and clarity of the paper's content, including figures, legends, terminology, and grammar?
4. What are the limitations of the experimental results, such as the small search space, few baselines, and lack of real-world testing?
5. How could the paper be improved, such as adding more novelty to the core method, expanding the literature review, and conducting more comprehensive experiments? | Review | Review
Summary
This paper introduces a method for NAS of GNN architectures under hardware constraints. They describe results on a few datasets.
Strengths
The authors point out that GNN architectures obtained from NAS can be inefficient when deployed on hardware and that there is an opportunity to jointly optimize the architecture and the hardware together.
Weaknesses
Very poorly written
a) Figures and legends not explained properly.
b) Proper literature survey for multi-objective NAS not done. e.g. see https://arxiv.org/abs/1804.09081 which links to several other papers.
c) Terminology not introduced.
i) What is LUT, FF and DSP?
ii) Sec2P1L4 should be h* \in \mathcal{H}(a*)
d) Riddled with typos and grammar mistakes
“...Using reinforcemnt learning…”
“...We test the seaching efficiency of our method…”
“...the joint search ahieves the best accuracy ..”
“...and shortes searching time…”
“...The best accuacry result on differnt datasets.”..
“..distribution of decent sampels ..”
“...adds up to a coniderable level….”
“...more sampels are...”
“...that is perferred to the controller…”
Low novelty
a) The problem formulation is not that interesting. It boils down to multi-objective NAS with an expanded hardware search space.
b) Their NAS algorithm is not novel. It mostly uses the original NAS from Zoph et al.
c) Hardware constraints are implemented with the rejection sampling. Samples with hardware realizations outside spec are awarded score zero. It would be much more interesting to solve the joint problem or find the best hardware configuration for a given architecture.
Low impact and weak experiments
a) Focus on GNNs and FPGAs seems unnecessary. The problem is more generic in reality, i.e., multi-objective NAS. GNNs and FPGAs are only a specific application.
b) Experimental results are weak. A small search space defined in Table 2. Few baselines - random and phased search.
c) Figure 3 should also plot accuracy.
d) The authors don’t report SOTA accuracy results. What’s the delta from SOTA?
e) The results are not tested on real hardware and the hardware search space seems arbitrary. Do the results generalize to arbitrary hardware spaces e.g. those with very non-linear objectives?
What could make the paper better
a) Paper writing needs to dramatically improve. The manuscript is very premature and needs a lot of proofreading and polishing.
b) Proper literature review is also needed.
c) The core method lacks novelty. It’s simply an instance of multi-objective NAS for which there are many methods.
d) Experiments are lacking.
i) Need more than 2 baselines. Random and phased search is not sufficient.
ii) Need more generic hardware architectures and complicated search spaces.
iii) Need to test realizations on real hardware to show real world impact. Otherwise optimization remains theoretical. |
ICLR | Title
FGNAS: FPGA-Aware Graph Neural Architecture Search
1. What is the focus of the paper regarding hardware-aware GNN architectural search?
2. What are the strengths and weaknesses of the proposed methodologies in the paper?
3. How does the reviewer assess the novelty and originality of the paper's contributions?
4. What are the concerns regarding the FPGA implementation and hardware cost modeling in the paper?
5. Do you have any suggestions for improving the paper or its contributions? | Review | Review
The paper provides a method for hardware-aware GNN architectural search using reinforcement learning, showing some improvements over random search and disjoint architecture/hardware optimization.
Pros: The paper is well-structured and easy to follow, and the methods are fully conveyed to the reader. The general topic of the paper is of practical importance; however, the proposed methodologies are not entirely novel and are rather borrowed from prior works in NAS for CNN/GNN design (more details below).
Cons:
Perhaps my biggest concern about the paper is the limited novelty. The GNN architecture search components (i.e., search-space) as well as the reinforcement learning method used are previously studied in the literature (e.g., Gao et al. 2019 in the paper). The FPGA implementation of the GNN is also based on a previously published work (Zhang et al. 2015 in the paper). The utilized FPGA cost model and optimization scheme has also been studied in the context of NAS for CNNs with a very similar (even slightly more complex) setup [1]. The effort to transition from a CNN to a GNN is not enough to justify the methods in the paper as a standalone contribution.
The hardware cost in its current shape merely discards invalid configurations but does not perform any ranking on the "valid" configurations. As such, the term "joint optimization" of hardware and GNN is misleading since the hardware aspect only performs a sanity check on the design constraints but does not differentiate between configs that comply with the said constraints. Different hardware configs certainly have different characteristics in terms of latency, memory, power, area, etc. A better formulation thus would have been to model the hardware cost such that it differentiates between valid configs in terms of the above characteristics.
[1] Jiang, Weiwen, et al. "Accuracy vs. efficiency: Achieving both through FPGA-implementation aware neural architecture search." Proceedings of the 56th Annual Design Automation Conference, 2019.
ICLR | Title
FGNAS: FPGA-Aware Graph Neural Architecture Search
Abstract
The success of graph neural networks (GNNs) in the past years has aroused growing interest and effort in designing best models to handle graph-structured data. As the neural architecture search (NAS) technique has been witnessed to rival against human experts in discovering efficient network topology, recently, it has been applied to the field of graphic network engineering. However, such works on graphic NAS so far are purely software (SW) design and not considering hardware (HW) constraints at all, which often leads to sub-optimal system performance. To address this problem, we propose the first SW-HW co-design framework for automating the search and deployment of GNNs. Using FPGA as the target platform, our framework is able to perform the FPGA-aware graph neural architecture search (FGNAS). To evaluate our design, we experiment on benchmark datasets, namely Cora, CiteCeer, and PubMed, and the results show FGNAS has better capability in optimizing the accuracy of GNNs when their hardware implementation is specifically constrained.
1 INTRODUCTION
Graph neural networks (GNNs) are the state of the art in solving machine learning problems represented in graph forms, including social networking (Tan et al., 2019; Nurek & Michalski, 2019), molecular interaction (Huang et al., 2020; Spalević et al., 2020), and problems in Electronic Design Automation (EDA) (Ma et al., 2020; Ma et al., 2019), etc. As a result, GNN has attracted a great deal of research interest in deep learning community for both software (SW) (Wu et al., 2019; Li et al., 2015) and hardware (HW) (Wang et al., 2020; Zeng & Prasanna, 2020).
Similar to many other neural networks, the performance of GNN significantly depends on its neural architecture, and hence considerable effort has been put into tuning its computational components (Hamilton et al., 2017). Among the existing algorithms, message-passing has set the ground of spatial-based convolutional graph neural networks, from which most recent breakthough are derived (Gilmer et al., 2017). As the algorithmic variation increases, to identify better sub-structures of GNN tends to be substantially challenging due to the design space exponentially grows. On the other hand, however, the improvement of feature-extracting ability is still highly demanded.
Soon after being proposed by Zoph & Le (2016), neural architecture search has become a mainstream research topic of machine learning. It has been demonstrated NAS is promising to surpass the human experts and meanwhile liberate their laborious effort (Chen et al., 2018). Although the original NAS using reinforcement learning method suffers from timing inefficiency problem that following works strived to solve (Yan et al., 2019; Liu et al., 2019), it is well established thus adapted to be used for searching novel GNNs.
Quite lately, Gao et al. (2019) has designed the first graph NAS framework. Based on the stateof-art GNN methodology, Graph NAS has formulated the layered design space that is perferred to the controller. Besides, parameter sharing strategy is also adopted. Coincidentally, Zhou et al. (2019) has also used reinforcement learning to automate graph neural network design on similar search space but with split controllers. The search process is well guided in an incremental manner such that the sampling efficiency is boosted. Both of these works have improved the accuracy of GNN against existing hand-crafted networks, indicating NAS is the future solution for graph-based learning.
However, these works are only focusing on the neural architecture while the hardware implementation for GNNs (Geng et al., 2019) is equally important to the final performance. The hardware-aware NAS has been widely discussed for CNNs (Zhang et al., 2020; Wang et al., 2018). But, to our best knowledge, joint search of hardware and GNN architectures have not publicly reported. In this paper, we use Graph NAS with the hardware design objective and propose a software-hardware co-design framework. We employ FPGA as the vehicle for illustration and implementation of our methods. Specific hardware constraints are considered so quantization is adopted to compress the model. Under specific hardware constraints, we show our framework can successfully identify a solution of higher accuracy but using shorter time than random search and the traditional two-step tuning.
2 PROBLEM FORMULATION
The problem of jointly searching graph neural network architectures and hardware design can be formulated as the following. Given an architecture space A, each sample a ∈ A characterizes a hardware space H(a). The objective is then to find the optimal architecture and hardware design pair 〈a∗, h∗〉 such that a∗ ∈ A and h ∈ H(a∗). With the target dataset Dt for training and Dv for validation, the accuracy of a design can be measured as acct(a, h) and accv(a, h), respectively, while the hardware performance hp(a, h) is independent of the data. As the neural architecture sample is parameterized by the weights w, we define the optimality of the design as
a∗ = arg max a∈A accv(a(w ∗), h∗)
s.t. : w∗ = arg max w acct(a(w), h ∗)
(1)
and at the same time
h∗ = arg max h∈H(a∗) hp(a∗, h) s.t. : hp(a∗, h∗) ≥ spec (2)
where spec is the hardware specification required to be satisfied by the design.
However, there is a problem with the above formulation that is challenging to implementation. In the case where the specification of hardware relate to multiple objectives, e.g. area and latency, the hardware performance is not a scalar and hence the optimization is ambiguous. In practice, the design is acceptable as long as the hardware constraints are met. In order to optimize the hardware design, one can set more and more strict constraints to the aspect of interest. Therefore, we relax the optimization of hardware performance to the hardware eligibility, and reformulate the problem as
a∗ = arg max a∈A accv(a(w ∗), h)
s.t. : w∗ = arg max w
acct(a(w), h) (3)
and
∃h ∈ H(a∗) s.t. : hp(a∗, h) ≥ spec. (4)
It is worth mentioning when the hardware constraint has two and more dimensions, the ≥ symbol applies to every dimension.
In this work, we rely on the recurrent neural network to jointly optimize both the GNN architecture and its hardware design. As such, the reinforcement learning NAS framework is restructured to coexploring the software and hardware spaces. Based on the above formulation, our framework aims to discover the best neural architectures which are guaranteed to be implementable.
3 FGNAS
In this section, we delve into the details of our FPGA-aware graph nerual architecture search (FGNAS) framework. As shown in Figure 1, there are three main components comprising FGNAS,
namely the controller, the FPGA model builder, and the gnn model trainer. For each layer of the child network, our controller generates the parameters of three types defining the network topology, hardware realization, and the precision. With each sample of the controller, a hardware model will be firstly constructed and evaluated against the predefined constraints. Since most samples may not be implementable, their training are circumvented and rewards assigned to be 0; otherwise the network will be built, trained and validated. Finally, when a mini-batch of samples are evaluated, the parameters of the controller will be updated once. The process terminates after a certain number of episodes.
3.1 SEARCH SPACE
We divide the search space into two sub-spaces: architectural space and hardware space. For each layer of a GNN, the search spaces are completely the same so the same types of parameters are sampled. For illustration convenience, we divide the parameters of a single layer and describe them as follows.
3.1.1 ARCHITECTURAL SPACE
The architectural space contains the parameters that defines the operational mechanism of graph network. As the time of writing, the topologies of GNNs share message-passing computational flow characterized by graph-wise convolution, and only vary in the way embedded features are generated and combined. In consequence, we define the architecture space regarding the tuning of sub-structures.
Basically, three separate stages are cascaded in each layer: (1) the embedding form last layer are linearly converted; (2) messages between each connected pair of nodes are weighted; and (3) new features of neig‘hbouring nodes are aggregated to produce new embedding. Following the three operations, five parameters are included in the architectural space.
• Embedding Dimension. The embedding represents the features of the nodes extracted by the hidden layers. A linear operation is applied to convert the previous embedding into another space of d dimensions.
• Attention Type. The attention type refers to how the messages between connected nodes are weighted. For the new temporary embedding Hki,j , a coefficient is firstly computed for weighting it during the aggregation phase.
• Aggregation Type. For all the incoming messages, there different ways in mixing them to produce the new features. The common methods are namely, taking the summation, mean, and maximum.
• Number of Heads. We apply multi-headed attention to the GNN architecture as it is commonly used to stablize the performance. Heads of the same message are concatenated for every layer except the last one where they are averaged to match the output dimension.
• Activation Function. The activation function can add nonlinearity to the embedding. Considering the hardware constraints, we include four options for nonlinearity: “relu”, “tanh”, “sigmoid”, and “elu”.
3.1.2 HARDWARE SPACE
The computation of GNN for inference are all parallelizable in terms of the features of the same embedding. As a large dimension would require exponentially complex computation, it is necessary to divide the vector-wise operation into sub-tasks. Therefore, we choose the size for grouping the features as a key parameter to scale the hardware.
Almost all the main tasks can be divided, and we summarize them into four cases:
1. For the embedding to transform from Ti to To features, two parameters ti and to are used for grouping them separately.
2. The attention coefficients possibly also require linear operation but the output is a scalar, so we only divide the input by size of tattn.
3. The aggregation is similar to the above case in that there is only one output. We also assign one parameter taggr for it.
4. Lastly, the nonlinearity requires one-to-one operation on the feature vector. As this is probably the most challenging operation for hardware, we also group the features into size of tact.
In addition to the architectural and hardware space, we also consider the mixed-precision design which play important roles in both software and hardware performance. In this case, the quantization space also needs to be explored and details is discussed in Section 3.4.
3.2 ALGORITHM
Reinforcement learning is applied in our design as the searching backbone. As we have parameterized the design of both architecture and hardware and formatted these parameters by layer, one RNN can be employed to sample the parameters sequentially as actions from the respective list of options. For the sampled design, the hardware performance is analyzed using our FPGA model, under the constraints of resources and latency. Only if the sample hardware design satisfy the hardware specifications, will the software design be trained and tested on the dataset. The reward for the sample 〈a, h〉 is then
R(a, h) =
{ 0, hp(a, h) < spec
accv(a, h), otherwise (5)
This way, the training can be circumvented as possible and the search can be faster than pure NAS.
Once the reward is obtained, the parameter θ of the controller is updated following the policy gradient rule (Williams, 1992):
∇J(θ) = 1 m m∑ k=1 T∑ t=1 γT−t∇θ log πθ(at|a(t−1):1)(Rk − b) (6)
where J(θ) is the expected reward at the initial step.
The controller is configured as follows. The number of steps T equals the total number of parameters to be sampled; the batch size for updating θ is m = 5 episodes; the reward is not discounted, so γ = 1; and the baseline b is the exponential moving average of the reward with a decay factor of 0.9.
3.3 FPGA MODELING
We adopt a generic FPGA design model that is widely used for CNN accelerators (Zhang et al., 2015). Figure 2 illustrates the block diagram of the hardware segment for one layer. For each layer, four stages are pipelined, consisting of the linear transform, attention coefficient computation, aggregation, and nonlinear operation. The messages in-between consecutive stages are registered. Two buffers are employed to resolve the read/write conflict by alternately accessing the main memory and serving the computational units. As mentioned above, this model is fully scalable in the dimension of the embedded features based on the parameters defined.
3.4 MIXED PRECISION
We also consider the mixed-precision scenario in our design, where data are quantized using different bit widths. Like the other parameters, quantization parameters are also arranged by layer, so data in the same layer share the same format. As quantization methods are plentiful and have a significant impact on model accuracy, we avoid varying them and simply adopt post-training quantization (PTQ) with linear quantization as follows.
Given the quantization interval $\Delta$ and range bounded by $B_{min}$ and $B_{max}$, the quantization of a real number $x$ is
$$\hat{x} = \mathrm{clip}(\lfloor x/\Delta \rceil \times \Delta,\; B_{min},\; B_{max}), \quad (7)$$
where $\lfloor \cdot \rceil$ denotes rounding to the nearest integer. For the signed fixed-point format, $\Delta$, $B_{min}$ and $B_{max}$ are determined by the number of bits allocated to the integer ($b_i$) and fractional ($b_f$) parts as
$$\Delta = 2^{-b_f}, \quad B_{min} = -2^{b_i}, \quad B_{max} = 2^{b_i} - \Delta. \quad (8)$$
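The following is a minimal Python sketch of this linear PTQ scheme; the function name and example values are illustrative only.

```python
import numpy as np

def linear_quantize(x, bits_int, bits_frac):
    """Signed fixed-point linear quantization of Eqs. (7)-(8).
    bits_int and bits_frac are the integer and fractional bit widths
    (b_i and b_f in the text); x may be a scalar or a numpy array."""
    delta = 2.0 ** (-bits_frac)        # quantization interval, Eq. (8)
    b_min = -(2.0 ** bits_int)         # lower bound of the range
    b_max = 2.0 ** bits_int - delta    # upper bound of the range
    return np.clip(np.round(x / delta) * delta, b_min, b_max)  # Eq. (7)

# Example: quantize weights to 2 integer and 4 fractional bits.
w = np.array([0.37, -1.92, 5.0])
print(linear_quantize(w, bits_int=2, bits_frac=4))  # [0.375, -1.9375, 3.9375]
```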
Consequently, in the mixed-precision design, four parameters are added to the search space, namely $w_i$, $w_f$ for the weights and $a_i$, $a_f$ for the activations. With mixed precision, the hardware space increases exponentially, and the components in our FPGA model need to be configured by bit width. We rely on the HLS tool of Xilinx to synthesize all configurations to profile the size and latency information. The synthesis results of sample operational units are shown in the supplemental material. It is noted that the impact of quantization on hardware varies significantly among operators.
4 EXPERIMENT
In this section we test the performance of FGNAS on held-out graph datasets for the node classification task. To study the search efficiency, both the test accuracy and the searching time are evaluated and compared. The experiments are carried out using a single Nvidia 1080Ti graphics processing unit (GPU) and an Intel 8700K CPU. We do not deploy on physical FPGA chips, but we use Xilinx devices for reference. We assume the clock rate is 100 MHz throughout all the experiments. It is noted that since we constrain the hardware, comparing the accuracy to state-of-the-art networks is not quite sensible; instead we evaluate the searching efficiency against baseline methods.
4.1 DATASET
Three datasets are used for benchmarking the performance on transductive learning, namely Cora, CiteSeer, and PubMed. The statistics and training configurations are listed in Table 2. The setting for training on these datasets follows that of Zhou et al. (2019). Since the volumes and complexity of the datasets vary largely, the hardware constraints of the search are set accordingly for each dataset.
4.2 BASELINE METHOD
To evaluate the ability and efficiency of FGNAS, two methods are considered as baselines and evaluated in parallel with our method.
Random Search. We perform a random search approach as the baseline of search efficiency. The random search results can reflect the distribution of candidate solutions in a specific design space. For certain data and hardware constraints, random search can already render decent results.
Separate Search. The traditional two-phase design philosophy cannot fully explore the design space joined by the hardware and architectural subspaces. In this philosophy, a fixed pure network architecture is first selected (by handcraft or automation), and afterwards a hardware design is customized for this specific architecture. Therefore, it explores only a fraction of the full design space of architecture-hardware pairs.
To show the advantage of our co-design method over the separate design, we follow the above pipeline and partially use our framework to perform a pure architecture search followed by a pure hardware search based on the best network found.
4.3 SEARCHING DETAILS
The actual search space used throughout the experiments is shown in Table 1. During the search, the controller is updated with the ordinary SGD algorithm and a constant learning rate of 0.1. When a child network is sampled and hardware-verified, it is trained using the Adam optimizer for 200 epochs. Validation is performed after every epoch, and the highest validation accuracy is taken as the reward for the controller. As a rule of thumb, we set the depth of the child networks to two layers.
The searching stops after sampling 2000 episodes. With hardware constraints, however, most samples in both the joint and random search may not be valid, so their training can be skipped. Consequently, for a fair comparison, we use the total number of trained samples to guide the random search such that the GPU hours are on the same scale. In the case of separate search, the GPU time is completely defined by the episode quantity, and we allot 200 episodes for the architecture search and 800 for the hardware search. Each experiment includes 5 runs, and the one with the highest test accuracy is taken for evaluation. For the selected run, we report the accuracy of both the best sample and the top-10 samples averaged.
4.4 PERFORMANCE
We test the searching efficiency of our method across varying hardware constraints on latency, number of LUTs/FFs, and number of DSPs. The results on Cora are shown in Table 3. In general, the joint search achieves the best accuracy and the shortest searching time, although there is some variance.
4.4.1 COMPARING WITH RANDOM SEARCH
The random search is already very performant in the sense that the highest accuracy is discoverable under certain hardware constraints. For example, with 1 ms latency, 100,000 LUTs/FFs and 100 DSPs, it achieves the best accuracy among the three methods. However, when the constraints are narrower, the distribution of decent samples is far sparser. As a result, the best accuracy covered by searching a fixed number of samples is lower than that of the other two methods.
The search time of the random method is around 1x to 2x that of the joint search. There are two explanations for this. Firstly, the sampled networks are more scattered, so their average size is larger. Although the GPU calls are equal, the training time of randomly sampled networks is higher. Another reason is that, in order to reach the same number of implementable samples as the joint search, many more episodes need to be inspected, so the CPU time adds up to a considerable level.
4.4.2 COMPARING WITH SEPARATE SEARCH
The separate search consumes the most time in our setting because 1) more samples are actually trained due to the manual setup, and 2) the architecture found in the first step is larger than the average size. It is observed that its accuracy is slightly better than random search and in some cases surpasses the joint search. However, since the pure architecture search is not aware of the hardware constraints at all, the post-quantization accuracy may degrade severely, as a decent bit width allocation hardly exists.
4.4.3 IN-DEPTH OBSERVATION
The experimental results show that our method explores the design space more efficiently than the baselines. It achieves the best accuracy in most hardware cases while running 1x - 3x faster. The advantage owes to the fact that the SW/HW co-search explores the design space in a local region close to the constrained area. Figure 3 plots the actual hardware statistics of the searched samples projected onto three usage-latency planes. Among all the methods, the Pareto frontier of the joint search is closest to the valid area constrained by the hardware.
5 CONCLUSION
Neural architecture search is a promising solution for the advancement of graph neural network engineering, but it lacks hardware awareness. In this work we propose an FPGA-based SW/HW co-design framework, named FGNAS, that jointly explores the architectural and hardware spaces. Using reinforcement learning, a generic hardware model, and mixed-precision design, FGNAS is evidently more efficient than the random search and traditional separate methods. Under different hardware constraints, FGNAS achieves the best accuracy in the majority of cases with a 1x-3x faster running time. Besides, the cause of this advantage is discussed via statistical analysis. | 1. What is the focus of the paper regarding GNN NAS?
2. What are the strengths and weaknesses of the proposed approach in terms of its impact and technical contributions?
3. How does the reviewer assess the significance of the paper's contribution to the field, particularly in comparison to prior works?
4. What are the questions raised by the reviewer regarding the hardware-software co-design and its synergy?
5. Are there any concerns or suggestions regarding the clarity and detail of the explanations provided in the paper? | Review | Review
While previous GNN NAS work is purely at the software level, this paper claims to be the first that takes hardware constraints into consideration in the NAS process. Such a software-hardware co-design often leads to a better-searched model (i.e., with higher accuracy). Overall, I think this paper is targeting an important problem, but I do not see enough technical contributions to make it a top conference paper.
Why use FPGAs as your hardware backends? Can your framework be applied to other backends like CPUs and GPUs? Would GPUs and CPUs be better choices considering that regular users would have direct access to them?
Compared to previous work on hardware-aware NAS for CNNs, what are the new challenges and opportunities incurred by GNNs? Could the previous infrastructure be applied to GNN NAS? What kind of extensions need to be made? Key technical contributions need to be better articulated.
The hardware-software co-design parts are not clear. In particular, I do not get the synergy between the software and hardware design. The hardware designs are pretty much ad hoc. Could you elaborate on the insights behind your co-designs? For instance, when/why would you get better performance compared to the separate search as shown in Table 3? A more detailed explanation is needed. |
ICLR | Title
FGNAS: FPGA-Aware Graph Neural Architecture Search
Abstract
The success of graph neural networks (GNNs) in the past years has aroused growing interest and effort in designing the best models to handle graph-structured data. As the neural architecture search (NAS) technique has been witnessed to rival human experts in discovering efficient network topologies, recently, it has been applied to the field of graph network engineering. However, such works on graph NAS so far are purely software (SW) designs that do not consider hardware (HW) constraints at all, which often leads to sub-optimal system performance. To address this problem, we propose the first SW-HW co-design framework for automating the search and deployment of GNNs. Using FPGA as the target platform, our framework is able to perform the FPGA-aware graph neural architecture search (FGNAS). To evaluate our design, we experiment on benchmark datasets, namely Cora, CiteSeer, and PubMed, and the results show FGNAS has better capability in optimizing the accuracy of GNNs when their hardware implementation is specifically constrained.
1 INTRODUCTION
Graph neural networks (GNNs) are the state of the art in solving machine learning problems represented in graph form, including social networking (Tan et al., 2019; Nurek & Michalski, 2019), molecular interaction (Huang et al., 2020; Spalević et al., 2020), and problems in Electronic Design Automation (EDA) (Ma et al., 2020; Ma et al., 2019), etc. As a result, GNNs have attracted a great deal of research interest in the deep learning community for both software (SW) (Wu et al., 2019; Li et al., 2015) and hardware (HW) (Wang et al., 2020; Zeng & Prasanna, 2020).
Similar to many other neural networks, the performance of a GNN significantly depends on its neural architecture, and hence considerable effort has been put into tuning its computational components (Hamilton et al., 2017). Among the existing algorithms, message passing has set the ground for spatial-based convolutional graph neural networks, from which most recent breakthroughs are derived (Gilmer et al., 2017). As the algorithmic variation increases, identifying better sub-structures of GNNs tends to be substantially challenging because the design space grows exponentially. On the other hand, however, the improvement of feature-extracting ability is still highly demanded.
Soon after being proposed by Zoph & Le (2016), neural architecture search has become a mainstream research topic in machine learning. It has been demonstrated that NAS is promising to surpass human experts and meanwhile liberate their laborious effort (Chen et al., 2018). Although the original NAS using reinforcement learning suffers from a time-inefficiency problem that subsequent works strived to solve (Yan et al., 2019; Liu et al., 2019), it is well established and has thus been adapted to search for novel GNNs.
Quite lately, Gao et al. (2019) designed the first graph NAS framework. Based on the state-of-the-art GNN methodology, Graph NAS formulates a layered design space that is presented to the controller. Besides, a parameter sharing strategy is also adopted. Coincidentally, Zhou et al. (2019) also used reinforcement learning to automate graph neural network design on a similar search space but with split controllers. The search process is well guided in an incremental manner such that the sampling efficiency is boosted. Both of these works have improved the accuracy of GNNs against existing hand-crafted networks, indicating NAS is the future solution for graph-based learning.
However, these works focus only on the neural architecture, while the hardware implementation of GNNs (Geng et al., 2019) is equally important to the final performance. Hardware-aware NAS has been widely discussed for CNNs (Zhang et al., 2020; Wang et al., 2018), but, to our best knowledge, a joint search of hardware and GNN architectures has not been publicly reported. In this paper, we use graph NAS with a hardware design objective and propose a software-hardware co-design framework. We employ FPGA as the vehicle for the illustration and implementation of our methods. Specific hardware constraints are considered, so quantization is adopted to compress the model. Under specific hardware constraints, we show our framework can successfully identify a solution with higher accuracy in a shorter time than random search and the traditional two-step tuning.
2 PROBLEM FORMULATION
The problem of jointly searching graph neural network architectures and hardware designs can be formulated as follows. Given an architecture space A, each sample a ∈ A characterizes a hardware space H(a). The objective is then to find the optimal architecture and hardware design pair 〈a∗, h∗〉 such that a∗ ∈ A and h∗ ∈ H(a∗). With the target dataset Dt for training and Dv for validation, the accuracy of a design can be measured as acct(a, h) and accv(a, h), respectively, while the hardware performance hp(a, h) is independent of the data. As the neural architecture sample is parameterized by the weights w, we define the optimality of the design as
$$a^* = \arg\max_{a \in A}\; acc_v(a(w^*), h^*) \quad \text{s.t.}\quad w^* = \arg\max_{w}\; acc_t(a(w), h^*) \quad (1)$$
and at the same time
$$h^* = \arg\max_{h \in H(a^*)}\; hp(a^*, h) \quad \text{s.t.}\quad hp(a^*, h^*) \geq spec \quad (2)$$
where spec is the hardware specification required to be satisfied by the design.
However, there is a problem with the above formulation that makes it challenging to implement. In the case where the hardware specification relates to multiple objectives, e.g. area and latency, the hardware performance is not a scalar and hence the optimization is ambiguous. In practice, the design is acceptable as long as the hardware constraints are met. In order to optimize the hardware design, one can set stricter and stricter constraints on the aspect of interest. Therefore, we relax the optimization of hardware performance to hardware eligibility, and reformulate the problem as
$$a^* = \arg\max_{a \in A}\; acc_v(a(w^*), h) \quad \text{s.t.}\quad w^* = \arg\max_{w}\; acc_t(a(w), h) \quad (3)$$
and
$$\exists h \in H(a^*) \;\text{s.t.}\; hp(a^*, h) \geq spec. \quad (4)$$
It is worth mentioning that when the hardware constraint has two or more dimensions, the ≥ symbol applies to every dimension (a sketch of this element-wise check is given below).
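As a concrete illustration, the multi-dimensional feasibility check of Eq. (4) can be written as an element-wise comparison. This is a minimal sketch; the metric names are illustrative assumptions, and metrics where smaller is better are stored negated so the ≥ convention holds uniformly.

```python
def meets_spec(perf, spec):
    """Element-wise check hp(a, h) >= spec of Eq. (4).
    perf and spec map each constrained metric to a value; metrics
    where smaller is better (latency, resource usage) are negated
    so that >= holds in every dimension. Names are illustrative."""
    return all(perf[k] >= spec[k] for k in spec)

# Hypothetical usage with negated latency/resource metrics.
perf = {"neg_latency_ms": -0.8, "neg_luts": -90_000, "neg_dsps": -80}
spec = {"neg_latency_ms": -1.0, "neg_luts": -100_000, "neg_dsps": -100}
print(meets_spec(perf, spec))  # True: all constraints are satisfied
```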
In this work, we rely on a recurrent neural network to jointly optimize both the GNN architecture and its hardware design. As such, the reinforcement learning NAS framework is restructured to co-explore the software and hardware spaces. Based on the above formulation, our framework aims to discover the best neural architectures that are guaranteed to be implementable.
3 FGNAS
In this section, we delve into the details of our FPGA-aware graph neural architecture search (FGNAS) framework. As shown in Figure 1, three main components comprise FGNAS,
namely the controller, the FPGA model builder, and the GNN model trainer. For each layer of the child network, our controller generates the parameters of three types defining the network topology, hardware realization, and precision. For each sample from the controller, a hardware model is first constructed and evaluated against the predefined constraints. Since most samples may not be implementable, their training is skipped and their rewards are assigned to be 0; otherwise the network is built, trained, and validated. Finally, when a mini-batch of samples has been evaluated, the parameters of the controller are updated once. The process terminates after a certain number of episodes.
3.1 SEARCH SPACE
We divide the search space into two sub-spaces: architectural space and hardware space. For each layer of a GNN, the search spaces are completely the same so the same types of parameters are sampled. For illustration convenience, we divide the parameters of a single layer and describe them as follows.
3.1.1 ARCHITECTURAL SPACE
The architectural space contains the parameters that define the operational mechanism of the graph network. At the time of writing, the topologies of GNNs share a message-passing computational flow characterized by graph-wise convolution, and only vary in the way embedded features are generated and combined. Consequently, we define the architecture space in terms of tunable sub-structures.
Basically, three separate stages are cascaded in each layer: (1) the embeddings from the last layer are linearly transformed; (2) messages between each connected pair of nodes are weighted; and (3) new features of neighbouring nodes are aggregated to produce the new embedding. Following these three operations, five parameters are included in the architectural space.
• Embedding Dimension. The embedding represents the features of the nodes extracted by the hidden layers. A linear operation is applied to convert the previous embedding into another space of d dimensions.
• Attention Type. The attention type refers to how the messages between connected nodes are weighted. For the new temporary embedding $H^k_{i,j}$, a coefficient is first computed for weighting it during the aggregation phase.
• Aggregation Type. For all the incoming messages, there are different ways of mixing them to produce the new features. The common methods are taking the summation, mean, or maximum.
• Number of Heads. We apply multi-headed attention to the GNN architecture as it is commonly used to stabilize the performance. Heads of the same message are concatenated for every layer except the last one, where they are averaged to match the output dimension.
• Activation Function. The activation function can add nonlinearity to the embedding. Considering the hardware constraints, we include four options for nonlinearity: “relu”, “tanh”, “sigmoid”, and “elu”.
3.1.2 HARDWARE SPACE
The computations of a GNN at inference time are all parallelizable across the features of the same embedding. Since a large dimension would require prohibitively complex computation, it is necessary to divide each vector-wise operation into sub-tasks. Therefore, we choose the sizes for grouping the features as the key parameters for scaling the hardware.
Almost all the main tasks can be divided, and we summarize them into four cases:
1. For the embedding transform from $T_i$ to $T_o$ features, two parameters $t_i$ and $t_o$ are used for grouping the input and output features separately (see the sketch after this list).
2. The attention coefficients may also require a linear operation, but the output is a scalar, so we only divide the input into groups of size $t_{attn}$.
3. The aggregation is similar to the above case in that there is only one output. We also assign one parameter $t_{aggr}$ for it.
4. Lastly, the nonlinearity requires an element-wise operation on the feature vector. As this is probably the most challenging operation for hardware, we also group the features into groups of size $t_{act}$.
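To make the grouping concrete, the sketch below shows how a linear transform from $T_i$ to $T_o$ features would be tiled into groups of sizes $t_i$ and $t_o$. This is an illustrative software analogue of the hardware scheduling, not the FPGA implementation itself.

```python
import numpy as np

def tiled_linear(x, W, t_i, t_o):
    """Compute y = W @ x by iterating over (t_o x t_i) tiles,
    mimicking how the hardware splits the T_i -> T_o transform
    into sub-tasks of sizes t_i and t_o. Illustrative only."""
    T_o, T_i = W.shape
    y = np.zeros(T_o)
    for ro in range(0, T_o, t_o):          # one pass per output group
        for ri in range(0, T_i, t_i):      # accumulate over input groups
            y[ro:ro + t_o] += W[ro:ro + t_o, ri:ri + t_i] @ x[ri:ri + t_i]
    return y

x = np.random.randn(16)
W = np.random.randn(8, 16)
assert np.allclose(tiled_linear(x, W, t_i=4, t_o=2), W @ x)
```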
In addition to the architectural and hardware spaces, we also consider the mixed-precision design, which plays an important role in both software and hardware performance. In this case, the quantization space also needs to be explored; details are discussed in Section 3.4.
3.2 ALGORITHM
Reinforcement learning is applied in our design as the searching backbone. As we have parameterized the design of both architecture and hardware and arranged these parameters by layer, one RNN can be employed to sample the parameters sequentially as actions from the respective lists of options. For each sampled design, the hardware performance is analyzed using our FPGA model under the constraints of resources and latency. Only if the sampled hardware design satisfies the hardware specifications will the software design be trained and tested on the dataset. The reward for the sample 〈a, h〉 is then
$$R(a, h) = \begin{cases} 0, & hp(a, h) < spec \\ acc_v(a, h), & \text{otherwise} \end{cases} \quad (5)$$
This way, training is skipped whenever possible and the search can be faster than pure NAS.
Once the reward is obtained, the parameter θ of the controller is updated following the policy gradient rule (Williams, 1992):
$$\nabla J(\theta) = \frac{1}{m}\sum_{k=1}^{m}\sum_{t=1}^{T}\gamma^{T-t}\,\nabla_\theta \log \pi_\theta(a_t \mid a_{(t-1):1})\,(R_k - b) \quad (6)$$
where J(θ) is the expected reward at the initial step.
The controller is configured as follows. The number of steps T equals the total number of parameters to be sampled; the batch size for updating θ is m = 5 episodes; the reward is not discounted, so γ = 1; and the baseline b is the exponential moving average of the reward with a decay factor of 0.9.
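A minimal sketch of this update rule is given below, assuming a PyTorch-style controller. The variable names are illustrative, and since γ = 1 as configured above, no discounting appears.

```python
import torch

def update_controller(optimizer, log_probs, rewards, baseline, decay=0.9):
    """One REINFORCE update of Eq. (6) over a batch of m episodes.
    log_probs[k] holds the log pi_theta(a_t | a_(t-1):1) terms of
    episode k (as computed by the RNN controller), rewards[k] is R_k,
    and baseline is the running exponential moving average b."""
    losses = []
    for lp_episode, R in zip(log_probs, rewards):
        advantage = R - baseline
        # minimizing the negated objective ascends the policy gradient
        losses.append(-sum(lp_episode) * advantage)
        baseline = decay * baseline + (1 - decay) * R   # update b
    loss = torch.stack(losses).mean()                   # the 1/m average
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return baseline
```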
3.3 FPGA MODELING
We adopt a generic FPGA design model that is widely used for CNN accelerators (Zhang et al., 2015). Figure 2 illustrates the block diagram of the hardware segment for one layer. For each layer, four stages are pipelined, consisting of the linear transform, attention coefficient computation, aggregation, and nonlinear operation. The messages in-between consecutive stages are registered. Two buffers are employed to resolve the read/write conflict by alternately accessing the main memory and serving the computational units. As mentioned above, this model is fully scalable in the dimension of the embedded features based on the parameters defined.
3.4 MIXED PRECISION
We also consider the mixed-precision scenario in our design, where data are quantized using different bit widths. Like the other parameters, quantization parameters are also arranged by layer, so data in the same layer share the same format. As quantization methods are plentiful and have a significant impact on model accuracy, we avoid varying them and simply adopt post-training quantization (PTQ) with linear quantization as follows.
Given the quantization interval $\Delta$ and range bounded by $B_{min}$ and $B_{max}$, the quantization of a real number $x$ is
$$\hat{x} = \mathrm{clip}(\lfloor x/\Delta \rceil \times \Delta,\; B_{min},\; B_{max}), \quad (7)$$
where $\lfloor \cdot \rceil$ denotes rounding to the nearest integer. For the signed fixed-point format, $\Delta$, $B_{min}$ and $B_{max}$ are determined by the number of bits allocated to the integer ($b_i$) and fractional ($b_f$) parts as
$$\Delta = 2^{-b_f}, \quad B_{min} = -2^{b_i}, \quad B_{max} = 2^{b_i} - \Delta. \quad (8)$$
Consequently, in the mixed-precision design, four parameters are added to the search space, namely $w_i$, $w_f$ for the weights and $a_i$, $a_f$ for the activations. With mixed precision, the hardware space increases exponentially, and the components in our FPGA model need to be configured by bit width. We rely on the HLS tool of Xilinx to synthesize all configurations to profile the size and latency information. The synthesis results of sample operational units are shown in the supplemental material. It is noted that the impact of quantization on hardware varies significantly among operators.
4 EXPERIMENT
In this section we test the performance of FGNAS on held-out graph datasets for the node classification task. To study the search efficiency, both the test accuracy and the searching time are evaluated and compared. The experiments are carried out using a single Nvidia 1080Ti graphics processing unit (GPU) and an Intel 8700K CPU. We do not deploy on physical FPGA chips, but we use Xilinx devices for reference. We assume the clock rate is 100 MHz throughout all the experiments. It is noted that since we constrain the hardware, comparing the accuracy to state-of-the-art networks is not quite sensible; instead we evaluate the searching efficiency against baseline methods.
4.1 DATASET
Three datasets are used for benchmarking the performance on transductive learning, namely Cora, CiteSeer, and PubMed. The statistics and training configurations are listed in Table 2. The setting for training on these datasets follows that of Zhou et al. (2019). Since the volumes and complexity of the datasets vary largely, the hardware constraints of the search are set accordingly for each dataset.
4.2 BASELINE METHOD
To evaluate the ability and efficiency of FGNAS, two methods are considered as baselines and evaluated in parallel with our method.
Random Search. We perform a random search approach as the baseline of search efficiency. The random search results can reflect the distribution of candidate solutions in a specific design space. For certain data and hardware constraints, random search can already render decent results.
Separate Search. The traditional two-phase design philosophy cannot fully explore the design space joined by the hardware and architectural subspaces. In this philosophy, a fixed pure network architecture is first selected (by handcraft or automation), and afterwards a hardware design is customized for this specific architecture. Therefore, it explores only a fraction of the full design space of architecture-hardware pairs.
To show the advantage of our co-design method over the separate design, we follow the above pipeline and partially use our framework to perform a pure architecture search followed by a pure hardware search based on the best network found.
4.3 SEARCHING DETAILS
The actual search space used throughout the experiments is shown in Table 1. During the search, the controller is updated with the ordinary SGD algorithm and a constant learning rate of 0.1. When a child network is sampled and hardware-verified, it is trained using the Adam optimizer for 200 epochs. Validation is performed after every epoch, and the highest validation accuracy is taken as the reward for the controller. As a rule of thumb, we set the depth of the child networks to two layers.
The searching stops after sampling 2000 episodes. With hardware constraints, however, most samples in both the joint and random search may not be valid, so their training can be skipped. Consequently, for a fair comparison, we use the total number of trained samples to guide the random search such that the GPU hours are on the same scale. In the case of separate search, the GPU time is completely defined by the episode quantity, and we allot 200 episodes for the architecture search and 800 for the hardware search. Each experiment includes 5 runs, and the one with the highest test accuracy is taken for evaluation. For the selected run, we report the accuracy of both the best sample and the top-10 samples averaged.
4.4 PERFORMANCE
We test the searching efficiency of our method across varying hardware constraints on latency, number of LUTs/FFs, and number of DSPs. The results on Cora are shown in Table 3. In general, the joint search achieves the best accuracy and the shortest searching time, although there is some variance.
4.4.1 COMPARING WITH RANDOM SEARCH
The random search is already very performant in the sense that the highest accuracy is discoverable under certain hardware constraints. For example, with 1 ms latency, 100,000 LUTs/FFs and 100 DSPs, it achieves the best accuracy among the three methods. However, when the constraints are narrower, the distribution of decent samples is far sparser. As a result, the best accuracy covered by searching a fixed number of samples is lower than that of the other two methods.
The search time of the random method is around 1x to 2x that of the joint search. There are two explanations for this. Firstly, the sampled networks are more scattered, so their average size is larger. Although the GPU calls are equal, the training time of randomly sampled networks is higher. Another reason is that, in order to reach the same number of implementable samples as the joint search, many more episodes need to be inspected, so the CPU time adds up to a considerable level.
4.4.2 COMPARING WITH SEPARATE SEARCH
The separate search consumes the most time in our setting because 1) more samples are actually trained due to the manual setup, and 2) the architecture found in the first step is larger than the average size. It is observed that its accuracy is slightly better than random search and in some cases surpasses the joint search. However, since the pure architecture search is not aware of the hardware constraints at all, the post-quantization accuracy may degrade severely, as a decent bit width allocation hardly exists.
4.4.3 IN-DEPTH OBSERVATION
The experimental results show that our method explores the design space more efficiently than the baselines. It achieves the best accuracy in most hardware cases while running 1x - 3x faster. The advantage owes to the fact that the SW/HW co-search explores the design space in a local region close to the constrained area. Figure 3 plots the actual hardware statistics of the searched samples projected onto three usage-latency planes. Among all the methods, the Pareto frontier of the joint search is closest to the valid area constrained by the hardware.
5 CONCLUSION
Neural architecture search is a promising solution for the advancement of graph neural network engineering, but it lacks hardware awareness. In this work we propose an FPGA-based SW/HW co-design framework, named FGNAS, that jointly explores the architectural and hardware spaces. Using reinforcement learning, a generic hardware model, and mixed-precision design, FGNAS is evidently more efficient than the random search and traditional separate methods. Under different hardware constraints, FGNAS achieves the best accuracy in the majority of cases with a 1x-3x faster running time. Besides, the cause of this advantage is discussed via statistical analysis. | 1. What is the focus of the paper regarding hardware-algorithm codesign for graph neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in its evaluation and scalability?
3. How does the reviewer assess the novelty and potential impact of the paper's contribution?
4. Are there any questions or concerns regarding the paper's methodology, results, or conclusions? | Review | Review
This paper presents a hardware-algorithm co-design NAS framework for graph neural networks. The NAS algorithm considers hardware constraints and is able to explore an architecture-accelerator pair for the target task. The paper is nicely presented and the method is novel; however, the presented results are limited to small datasets, which makes it hard to evaluate the scalability and effectiveness of the proposed approach.
Strength:
The paper is written nicely and easy to follow.
The idea of building a hardware-algorithm codesign NAS for GNNs is novel.
Since GNNs are pervasive in solving graph related tasks, designing a systematical approach to accelerate them on reconfigurable devices is surely an important research direction.
Weakness
My main concern about this work is its evaluated datasets. The considered citation datasets are too small. It is not persuasive to show hardware performance on small datasets like this, and this also does not show the scalability of your proposed approach. The authors could consider larger datasets presented in the GraphSaint paper [1].
What is the major GNN application you are focusing on? If you are interested in doing fast inference, which seems to be the case because you are not exploiting batch-wise parallelism, what is the application? Do you intend to use GNNs on mobile systems? Or do you want to provide faster inference for cloud applications? Different usage scenarios indicate very different FPGA platforms and might heavily impact the results.
I have concerns regarding the scalability of the approach when the dataset is larger. The proposed RL-based NAS iteratively builds FPGA implementations using HLS. It is a known issue that hardware synthesis with HLS is very time-consuming; if the searched GNN becomes more complex, the hardware mapping might take a large amount of time and the NAS algorithm might take a very long time to converge.
My suggestions & confusions:
Do you do mapping to actual FPGA devices? What is the time cost? What if your synthesis results do not agree with your hardware fitter? Is there a case where synthesis passes but hardware mapping fails, so that the design cannot actually be mapped or has to be mapped with slower clock rates? This is possible due to things like large fanouts, routing congestion, etc.
Why is the accuracy of FGNAS not matching the state of the art? I realise you mentioned that you have hardware constraints, but theoretically speaking, if your hardware design is flexible enough and you give a large enough hardware budget to FGNAS, the results in Table 4 should match GraphNAS [2], AutoGNN [3] and PDNAS [4]. The performance gap between FGNAS and other graph NAS methods on Cora is huge (>10%). Is this performance gap simply from having the hardware constraints? Or is it because of the more complex dual-optimisation in the hardware space? Or is it because of the lossy quantisation? I would recommend you to assign infinite hardware resources and perform FGNAS to see whether it is actually on par with other graph NAS methods. Looking at an accuracy vs. hardware resources vs. latency curve, are you not exploring the whole range of possible accuracies?
When you say ‘it is necessary to divide the vector-wise operation into sub-tasks’, is this simply loop-tiling or am I misunderstanding? Effectively I assume you are picking tile sizes here, please correct me if I am wrong.
It might be worth showing the quantisation search space in the main text.
Some minor writing mistakes on Page 3: ‘gnn model’, ‘nei’hbouring’
In general, I like the idea of this paper and am happy to change my scores if the authors can address my major concerns in a) larger datasets, b) explaining further the motivation with respect to targeting applications, and c) showing FGNAS has the same performance as state-of-the-art NAS when given a large hardware budget. |
ICLR | Title
Recovering the Lowest Layer of Deep Networks with High Threshold Activations
Abstract
Giving provable guarantees for learning neural networks is a core challenge of machine learning theory. Most prior work gives parameter recovery guarantees for one hidden layer networks, however, the networks used in practice have multiple non-linear layers. In this work, we show how we can strengthen such results to deeper networks – we address the problem of uncovering the lowest layer in a deep neural network under the assumption that the lowest layer uses a high threshold before applying the activation, the upper network can be modeled as a well-behaved polynomial and the input distribution is gaussian.
1 INTRODUCTION
Understanding the landscape of learning neural networks has been a major challenge in machine learning. Various works give parameter recovery guarantees for simple one-hidden-layer networks where the hidden layer applies a non-linear activation $u$ after transforming the input $x$ by a matrix $W$, and the upper layer is the weighted sum operator: thus $f(x) = \sum a_i u(w_i^T x)$. However, the networks used in practice have multiple non-linear layers and it is not clear how to extend these known techniques to deeper networks.
We consider a multilayer neural network with the first layer activation u and the layers above represented by an unknown polynomial P such that it has non-zero non-linear components. More precisely, the function f computed by the neural network is as follows:
$$f_W(x) = P\big(u(w_1^T x), u(w_2^T x), \ldots, u(w_d^T x)\big) \quad \text{for} \quad P(X_1, \ldots, X_d) = \sum_{r \in \mathbb{Z}_+^d} c_r \cdot \prod_j X_j^{r_j}.$$
We assume that the input x is generated from the standard Gaussian distribution and there is an underlying true network (parameterized by some unknown W∗)1 from which the labels are generated.
In this work we strengthen previous results for one hidden layer networks to a larger class of functions representing the transform made by the upper layer functions if the lowest layer uses a high threshold (high bias term) before applying the activation: u(a − t) instead of u(a). Intuitively, a high threshold is looking for a high correlation of the input a with a direction w∗i . Thus even if the function f is applying a complex transform after the first layer, the identity of these high threshold directions may be preserved in the training data generated using f .
Learning with linear terms in P. Suppose $P$ has a linear component; then we show that increasing the threshold $t$ in the lowest layer is equivalent to amplifying the coefficients of the linear part. Instead of dealing with the polynomial $P$, it turns out that we can roughly think of it as $P(\mu X_1, \ldots, \mu X_d)$ where $\mu$ decreases exponentially in $t$ ($\mu \approx e^{-t^2}$). As $\mu$ decreases, it has the effect of diminishing the non-linear terms more strongly, so that the linear terms stand out relatively. Taking advantage of this effect we manage to show that if $t$ exceeds a certain threshold, the non-linear terms drop in value enough so that the directions $w_i$ can be learned by relatively simple methods. We show that we can get close to the $w_i$ by applying a simple variant of PCA. While an application of PCA can be thought of as finding principal directions as the local maxima of $\max_{\|z\|=1} E[f(x)(z^T x)^2]$,
1We suppress W when it is clear from context.
we instead perform $\max_{z:\, E[f(x)H_2(z^T x)]=1} E[f(x)H_4(z^T x)]$.² If $W^*$ has a constant condition number then the local maxima can be used to recover directions that are transforms of the $w_i$. Theorem 1 (informal version of Claim 2, Theorem 11). If $t > c\sqrt{\log d}$ for a large enough constant $c > 0$ and $P$ has linear terms with absolute value of coefficients at least $1/poly(d)$ and all coefficients at most $O(1)$, we can recover the weight vector $w_i$ within error $1/poly(d)$ in time $poly(d)$.
These approximations of the $w_i$, obtained collectively, can be further refined by looking at directions along which there is a high gradient in $f$; for monotone functions we show how in this way we can recover the $w_i$ exactly (or within any desired precision). Theorem 2. (informal version of Theorem 5) Under the conditions of the previous theorem, for monotone $P$, there exists a procedure to refine the angle to precision $\epsilon$ in time $poly(1/\epsilon, d)$ starting from an estimate that is $1/poly(d)$ close.
The above mentioned theorems hold for u being sign and ReLU.3
When $P$ is monotone and $u$ is the sign function, learning $W$ is equivalent to learning a union of half spaces. We learn $W^*$ by learning the sign of $P$, which is exactly the union of the half spaces $w_i^T x = t$. Thus our algorithm can also be viewed as a polynomial time algorithm for learning a union of a large number of half spaces that are far from the origin – to our knowledge this is the first polynomial time algorithm for this problem with this extra requirement (see earlier work Vempala (2010) for an exponential time algorithm). Refer to Appendix B.6 for more details.
Such linear components in $P$ may easily be present: consider for example the case where $P(X) = u(v^T X - b)$ where $u$ is, say, the sigmoid or the logloss function. The Taylor series of such functions has a linear component – note that since the linear term in the Taylor expansion of $u(x)$ has coefficient $u'(0)$, for the expansion of $u(x - b)$ it will be $u'(-b)$, which is $\Theta(e^{-b})$ in the case of the sigmoid. In fact one may even have a tower (deep network) of such sigmoid/logloss layers and the linear components will still be present – unless they are made to cancel out precisely; however, the coefficients will drop exponentially in the depth of the network and the threshold $b$.
Sample complexity with low thresholds and no explicit linear terms. Even if the threshold is not large or $P$ is not monotone, we show that $W^*$ can be learned with a polynomial sample complexity (although possibly exponential time complexity) by finding directions that maximize the gradient of $f$. Theorem 3 (informal version of Corollary 1). If $u$ is the sign function and the $w_i$'s are orthogonal, then in $poly(1/\epsilon, d)$ samples one can determine $W^*$ within precision $\epsilon$ if the coefficient of the linear terms in $P(\mu(X_1 + 1), \mu(X_2 + 1), \mu(X_3 + 1), \ldots)$ is at least $1/poly(d)$.
Learning without explicit linear terms. We further provide evidence that $P$ may not even need to have linear terms – in some restricted cases (Section 4), we show how such linear terms may implicitly arise even though they may be apparently entirely absent. For instance, consider the case when $P = \sum X_i X_j$, which does not have any linear terms. Under certain additional assumptions we show that one can recover the $w_i$ as long as the polynomial $P(\mu(X_1 + 1), \mu(X_2 + 1), \mu(X_3 + 1), \ldots)$ (where $\mu$ is $e^{-t}$) has linear term components larger than the coefficients of the other terms. Note that this transform, when applied to $P$, automatically introduces linear terms. Note that as the threshold increases, applying this transform on $P$ has the effect of gathering linear components from all the different monomials in $P$ and penalizing the higher degree monomials. We show that if $W^*$ is a sparse binary matrix then we can recover $W^*$ when the activation is $u(a) = e^{\rho a}$, under certain assumptions about the structure of $P$. When we assume the coefficients are positive, these results extend to binary low $l_1$-norm vectors without any threshold. Lastly, we show that for even activations ($\forall a, u(a) = u(-a)$) under orthogonal weights, we can recover the weights with no threshold.
Learning with high thresholds at deeper layers. We also point out how such high threshold layers could potentially facilitate learning at any depth, not just at the lowest layer. If there is any cut in the network that takes inputs $X_1, \ldots, X_d$, and if the upper layers' operations can be modelled by a polynomial $P$, then assuming the inputs $X_i$ have some degree of independence, we could use this to modularly learn the lower and upper parts of the network separately (Appendix E).
²Here $H_4$ and $H_2$ are the fourth and second order Hermite polynomials respectively. ³Theorem 1 holds for the sigmoid with $t \geq c \log d$.
Related Work. Various works have attempted to understand the learnability of simple neural networks. Despite known hardness results Goel et al. (2016); Brutzkus & Globerson (2017), there has been an array of positive results under various distributional assumptions on the input and the underlying noise in the label. Most of these works have focused on analyzing one hidden layer neural networks. A line of research has focused on understanding the dynamics of gradient descent on these networks for recovering the underlying parameters under gaussian input distribution Du et al. (2017b;a); Li & Yuan (2017); Zhong et al. (2017a); Zhang et al. (2017); Zhong et al. (2017b). Another line of research borrows ideas from kernel methods and polynomial approximations to approximate the neural network by a linear function in a high dimensional space and subsequently learning the same Zhang et al. (2015); Goel et al. (2016); Goel & Klivans (2017b;a). Tensor decomposition methods Anandkumar & Ge (2016); Janzamin et al. (2015) have also been applied to learning these simple architectures.
The complexity of recovery arises from the highly non-convex nature of the loss function to be optimized. The main result we extend in this work is by Ge et al. (2017). They learn the neural network by designing a loss function that allows a "well-behaved" landscape for optimization, avoiding this complexity. However, much like most other results, it is unclear how to extend it to deeper networks. The only known result for networks with more than one hidden layer is by Goel & Klivans (2017b). Combining kernel methods with isotonic regression, they show that they can provably learn networks with sigmoids in the first hidden layer and a single unit in the second hidden layer in polynomial time. We, however, model the above layer as a multivariate polynomial, allowing for a larger representation. Another work, Arora et al. (2014), deals with learning a deep generative network when several random examples are generated in an unsupervised setting. By looking at correlations between input coordinates they are able to recover the network layer by layer. We use some of their ideas in Section 4 when $W$ is a sparse binary matrix.
Notation. We denote vectors and matrices in bold face. $\|\cdot\|_p$ denotes the $l_p$-norm of a vector; $\|\cdot\|$ without a subscript implies the $l_2$-norm. For matrices, $\|\cdot\|$ denotes the spectral norm and $\|\cdot\|_F$ denotes the Frobenius norm. $N(0, \Sigma)$ denotes the multivariate Gaussian distribution with mean 0 and covariance $\Sigma$. For a scalar $x$ we will use $\phi(x)$ to denote the p.d.f. of the univariate standard normal distribution with mean zero and variance 1. For a vector $x$ we will use $\phi(x)$ to denote the p.d.f. of the multivariate standard normal distribution with mean zero and variance 1 in each direction. $\Phi$ denotes the c.d.f. of the standard Gaussian distribution; also define $\Phi^c = 1 - \Phi$. Let $h_i$ denote the $i$th normalized Hermite polynomial (Wikipedia contributors, 2018). For a function $f$, let $\hat{f}_i$ denote the $i$th coefficient in the Hermite expansion of $f$, that is, $\hat{f}_i = E_{g \sim N(0,1)}[f(g)h_i(g)]$. For a given function $f$ computed by the neural network, we assume that the training samples $(x, y)$ are such that $x \in \mathbb{R}^n$ is distributed according to $N(0, 1)$ and the label has no noise, that is, $y = f(x)$. Note: Most proofs are deferred to the Appendix due to lack of space.
2 APPROXIMATE RECOVERY WITH LINEAR TERM
In this section we consider the case when $P$ has a positive linear component and we wish to recover the true parameters $W^*$. The algorithm has two steps: 1) use an existing one-hidden-layer learning algorithm (SGD on a carefully designed loss, Ge et al. (2017)) to recover an approximate solution; 2) refine the approximate solution by performing a local search (for monotone $P$). The intuition behind the first step is that high thresholds ensure that, in expectation, $f$ is approximately close to a one-hidden-layer network, which allows us to transfer algorithms with approximate guarantees. Secondly, with the approximate solutions as starting points, we can evaluate the closeness of the estimate of each weight vector to the true weight vector using simple correlations. The intuition behind this step is to correlate with a function that is large only in the direction of the true weight vectors. This equips us with a way to design a local-search-based algorithm to refine the estimate to a small error.
For simplicity, in this section we will work with $P$ where the highest degree in any $X_i$ is 1. The degree of the overall polynomial can still be $n$. See Appendix B.8 for the extension to general $P$. More formally,
Assumption 1 (Structure of network). We assume that $P$ has the following structure:
$$P(X_1, \ldots, X_d) = c_0 + \sum_{i \in [d]} c_i X_i + \sum_{S \subseteq [d]: |S| > 1} c_S \prod_{j \in S} X_j$$
such that $c_i = \Theta(1)$⁴ for all $i \in [d]$, and $|c_S| \leq O(1)$ for all $S \subseteq [d]$ such that $|S| > 1$. $W^*$ has a constant condition number.
Thus $f(x) = c_0 + \sum_{i \in [d]} c_i u((w_i^*)^T x) + \sum_{S \subseteq [d]: |S| > 1} c_S \prod_{j \in S} u((w_j^*)^T x)$. Denote $f_{lin}(x) = c_0 + \sum_{i \in [d]} c_i u((w_i^*)^T x)$ to be the linear part of $f$.
Next we will upper bound the expected value of $u(x)$: for the "high-threshold" ReLU, that is, $u_t(a) = \max(0, a - t)$, $E_{g \sim N(0, \sigma^2)}[u_t(g)]$ is bounded by a function $\rho(t, \sigma) \approx e^{-\frac{t^2}{2\sigma^2}}$ (see Lemma 10). We also get a lower bound on $|\hat{u}_4|$ in terms of $\rho(t, \sigma)$.⁵ This enables us to make the following assumptions. Assumption 2. The activation function $u$ is a positive high threshold activation with threshold $t$, that is, the bias term is $t$. $E_{g \sim N(0, \sigma^2)}[u_t(g)] \leq \rho(t, \sigma)$ where $\rho$ is a positive decreasing function of $t$. Also, $|\hat{u}_k| = t^{\Theta(1)}\rho(t, 1)$ for $k = 2, 4$. Assumption 3 (Value of t). $t$ is large enough such that $\rho(t, \|W^*\|) \approx d^{-\eta}$ and $\rho(t, 1) \approx d^{-p\eta}$ for a large enough constant $\eta > 0$ and $p \in (0, 1]$.
For example, for the high threshold ReLU, $\rho(t, 1) = e^{-t^2/2}$ and $\mu = \rho(t, \|W^*\|) = e^{-t^2/2\|W^*\|^2}$; thus $t = \sqrt{2\eta \log d}$ for large enough $d$ suffices to get the above assumption ($\kappa(W^*)$ is a constant).
These high-threshold activations are useful for learning because, in expectation, they ensure that $f$ is close to $f_{lin}$ since the product terms have low expected value. This is made clear by the following lemmas: Lemma 1. For $|S| > 1$, under Assumption 2 we have,
$$E\left[\prod_{j \in S} u_t((w_j^*)^T x)\right] \leq \rho(t, 1)\left(\kappa(W^*)\rho(t, \|W^*\|)\right)^{|S|-1}.$$
So if $\mu := \kappa(W^*)\rho(t, \|W^*\|)$, then $E[\prod_{j \in S} X_j[x]] \leq \rho(t, 1)\mu^{|S|-1}$.
Lemma 2. Let $\Delta(x) = f(x) - f_{lin}(x)$. Under Assumptions 1, 2 and 3, if $t$ is such that $d\rho(t, \|W^*\|) \leq c$ for some small enough constant $c > 0$, we have
$$E[|\Delta(x)|] \leq O\left(d^3 \rho(t, 1)\rho(t, \|W^*\|)\right) = O\left(d^{-(1+p)\eta + 3}\right).$$
Note: We should point out that $f(x)$ and $f_{lin}(x)$ are very different pointwise; they are just close in expectation under the distribution of $x$. In fact, if $d$ is some constant then even the difference in expectation is some small constant.
This closeness suggests that algorithms for recovery under labels from $f_{lin}$ can be used to approximately recover with labels from $f$.
Learning One Layer Neural Networks using Landscape Design. Ge et al. (2017) proposed an algorithm for learning one-hidden-layer networks. Intuitively, their approach is to design a well-behaved loss function based on correlations to recover the underlying weight vectors. They show that the local minima of the following optimization correspond to some transform of each of the $w_i^*$ – thus it can be used to recover a transform of the $w_i^*$, one at a time.
$$\max_{z:\, E[f_{lin}(x)H_2(z^T x)] = \hat{u}_2} \text{sgn}(\hat{u}_4)\, E[f_{lin}(x)H_4(z^T x)]$$
which they optimize using the Lagrangian formulation (viewed as a minimization):
$$\min_z\; G_{lin}(z) := -\text{sgn}(\hat{u}_4)\, E[f_{lin}(x)H_4(z^T x)] + \lambda\left(E[f_{lin}(x)H_2(z^T x)] - \hat{u}_2\right)^2$$
where $H_2(z^T x) = \|z\|^2 h_2\left(\frac{z^T x}{\|z\|}\right) = \frac{(z^T x)^2}{\sqrt{2}} - \frac{\|z\|^2}{\sqrt{2}}$ and $H_4(z^T x) = \|z\|^4 h_4\left(\frac{z^T x}{\|z\|}\right) = \sqrt{6}\left(\frac{(z^T x)^4}{12} - \frac{\|z\|^2 (z^T x)^2}{2} + \frac{\|z\|^4}{4}\right)$ (see Appendix A.1 for more details). Using properties
⁴We can handle $c_i \in [d^{-C}, d^C]$ for some constant $C$ by changing the scaling on $t$. ⁵For similar bounds for the sigmoid and sign, refer to Appendix B.7.
of Hermite polynomials, we have $E[f_{lin}(x)H_2(z^T x)] = \hat{u}_2 \sum_i c_i (z^T w_i^*)^2$ and similarly $E[f_{lin}(x)H_4(z^T x)] = \hat{u}_4 \sum_i c_i (z^T w_i^*)^4$. Thus
$$G_{lin}(z) = -|\hat{u}_4| \sum_i c_i (z^T w_i^*)^4 + \lambda \hat{u}_2^2 \left(\sum_i c_i (z^T w_i^*)^2 - 1\right)^2.$$
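As an illustration of how this objective can be estimated from samples, the sketch below forms the empirical Lagrangian loss from data. It is a minimal sketch; `u2_hat`, `u4_hat`, and `lam` are assumed to be given as in the text.

```python
import numpy as np

def hermite_H2(v, z_sq):
    # H2(z^T x) = (z^T x)^2 / sqrt(2) - ||z||^2 / sqrt(2)
    return (v ** 2 - z_sq) / np.sqrt(2)

def hermite_H4(v, z_sq):
    # H4(z^T x) = sqrt(6) ((z^T x)^4/12 - ||z||^2 (z^T x)^2/2 + ||z||^4/4)
    return np.sqrt(6) * (v ** 4 / 12 - z_sq * v ** 2 / 2 + z_sq ** 2 / 4)

def G_empirical(z, X, y, u2_hat, u4_hat, lam):
    """Empirical Lagrangian objective, with expectations replaced by
    sample means over (X, y) where y = f(x). u2_hat, u4_hat and lam
    are assumed known/chosen as in the text."""
    v = X @ z                                  # z^T x for each sample
    z_sq = z @ z                               # ||z||^2
    corr4 = np.mean(y * hermite_H4(v, z_sq))   # estimates E[f(x) H4(z^T x)]
    corr2 = np.mean(y * hermite_H2(v, z_sq))   # estimates E[f(x) H2(z^T x)]
    return -np.sign(u4_hat) * corr4 + lam * (corr2 - u2_hat) ** 2
```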
Using results from Ge et al. (2017), it can be shown that the approximate local minima of this problem are close to columns of $(TW^*)^{-1}$, where $T$ is a diagonal matrix with $T_{ii} = \sqrt{c_i}$.
Definition 1 ($(\epsilon, \tau)$-local minimum/maximum). $z$ is an $(\epsilon, \tau)$-local minimum of $F$ if $\|\nabla F(z)\| \leq \epsilon$ and $\lambda_{min}(\nabla^2 F(z)) \geq -\tau$. Claim 1 (Ge et al. (2017)). An $(\epsilon, \tau)$-local minimum $z$ of the Lagrangian formulation with $\epsilon \leq O(\sqrt{\tau^3/|\hat{u}_4|})$ is such that for an index $i$, $|z^T w_i| = 1 \pm O(\epsilon/\lambda \hat{u}_2^2) \pm O(d\tau/|\hat{u}_4|)$ and $\forall j \neq i$, $|z^T w_j| = O(\sqrt{\tau/|\hat{u}_4|})$, where the $w_i$ are columns of $(TW^*)^{-1}$.
Ge et al. (2017) do not mention $\hat{u}_2$, but it is necessary in the non-orthogonal weight vectors case for the correct reduction. Since for us this value can be small, we mention the dependence. Note that these are not exactly the directions $w_i^*$ that we need; one way to think about it is that we can get the correct directions by estimating all columns and then inverting.
One-hidden-layer to Deep Neural Network. Consider the loss with f instead of flin:
$$\min_z:\; G(z) = -\text{sgn}(\hat{u}_4)\, E[f(x)H_4(z^T x)] + \lambda\left(E[f(x)H_2(z^T x)] - \hat{u}_2\right)^2$$
We previously showed that $f$ is close to $f_{lin}$ in expectation due to the high threshold property. This also implies that $G_{lin}$ and $G$ are close, and so are the gradients and (eigenvalues of) Hessians of the same. This closeness implies that the landscape properties of one approximately transfer to the other function. More formally, Theorem 4. Let $Z$ be an $(\epsilon, \tau)$-local minimum of a function $A$. If $\|\nabla(B - A)(Z)\| \leq \rho$ and $\|\nabla^2(B - A)(Z)\| \leq \gamma$, then $Z$ is an $(\epsilon + \rho, \tau + \gamma)$-local minimum of the function $B$, and vice versa.
We will now apply the above lemma to our $G_{lin}(z)$ and $G(z)$. Claim 2. For $\lambda = \Theta(|\hat{u}_4|/\hat{u}_2^2) \approx d^\eta$, an $(\epsilon, \tau)$-approximate local minimum of $G$ (for small enough $\epsilon, \tau \leq d^{-2\eta}$) is an $(O(\log d)d^{-(1+p)\eta+3}, O(\log d)d^{-(1+p)\eta+3})$-approximate local minimum of $G_{lin}$. This implies $z$ is such that for an index $i$, $|z^T w_i| = 1 \pm O(1)d^{-2/3 p\eta + 3}$ and $\forall j \neq i$, $|z^T w_j| = O(1)d^{-1/3 p\eta + 3/2}$, where the $w_i$ are columns of $(TW^*)^{-1}$ (ignoring $\log d$ factors). Note: For ReLU, setting $t = \sqrt{C \log d}$ for large enough $C > 0$ we can get closeness $1/poly(d)$ to the columns of $(TW^*)^{-1}$. Refer to Appendix B.7 for details for the sigmoid.
Ge et al. (2017) also provide an alternate optimization that, when minimized, simultaneously recovers the entire matrix $W^*$ instead of having to learn the columns of $(TW^*)^{-1}$ separately. We show how our methods can also be applied to that optimization in Appendix B.4 to recover $W^*$ by optimizing a single objective.
2.1 APPROXIMATE TO ARBITRARILY CLOSE FOR MONOTONE P
Assuming $P$ is monotone, we can show that the approximate solution from the previous analysis can be refined to arbitrary closeness using a random search method, followed by approximately finding the angle of our current estimate to the true direction.
The idea at a high level is to correlate with $\delta'(z^T x - t)$ where $\delta$ is the Dirac delta function. It turns out that the correlation is maximized when $z$ is equal to one of the $w_i$. Correlating with $\delta'(z^T x - t)$ checks how fast the correlation of $f$ with $\delta(z^T x - t)$ changes as you change $t$. To understand this, look at the case when our activation $u$ is the sign function: then note that the correlation of $u_t(w^T x)$ with $\delta'(w^T x - t)$ is very high, as its correlation with $\delta(w^T x - t')$ is 0 when $t' < t$ and significant when $t' > t$. So as we change $t'$ slightly from $t^-$ to $t^+$ there is a sudden increase. If $z$ and $w$ differ, then it can be shown that the correlation of $u_t(w^T x)$ with $\delta'(z^T x - t)$ essentially depends on $\cot(\alpha)$ where $\alpha$ is the angle between $w$ and $z$ (for a quick intuition, note that one can prove that $E[u_t(w^T x)\delta'(z^T x)] = c \cot(\alpha)$; see Lemma 16 in the Appendix). In the next section we will show how the same ideas work for non-monotone $P$, even if it may not have any linear terms, but we only manage to prove polynomial sample complexity for finding $w$ instead of polynomial time complexity.
In this section we will not correlate exactly with $\delta'(z^T x - t)$; instead we will use this high-level idea to estimate how fast the correlation with $\delta(z^T x - t')$ changes between two specific values as one changes $t'$, to get an estimate for $\cot(\alpha)$. Secondly, since we can't do a smooth optimization over $z$, we will do a local search by using a random perturbation and iteratively checking if the correlation has increased. We can assume that the polynomial $P$ doesn't have a constant term $c_0$, as otherwise it can easily be determined and cancelled out.⁶
We will refine the weights one by one. WLOG, let us assume that $w_1^* = e_1$ and we have $z$ such that $z^T w_1^* = z_1 = \cos(\alpha_1)$. Let $l(z, t, \epsilon)$ denote $\{x : z^T x \in [t - \epsilon, t]\}$ for $z \in S^{n-1}$.
Algorithm 1 RefineEstimate
1: Run EstimateTanAlpha on $z$ to get $s = \tan(\alpha)$, where $\alpha$ is the angle between $z$ and $w_1^*$.
2: Perturb the current estimate $z$ by a vector along the $(d-1)$-dimensional hyperplane normal to $z$ with the distribution $N(0, \Theta(\alpha/d))^{d-1}$ to get $z'$.
3: Run EstimateTanAlpha on $z'$ to get $s' = \tan(\alpha')$, where $\alpha'$ is the angle between $z'$ and $w_1^*$.
4: if $\alpha' \leq O(\alpha/d)$ then
5:   $z \leftarrow z'$
6: Repeat till $\alpha' \leq \epsilon$.
Algorithm 2 EstimateTanAlpha 1: Find t1 and t2 such that Pr[sgn(f(x))|x ∈ l(z, t′, )] at t1 is 0.4 and at t2 is 0.6. 2: Return t2−t1Φ−1(0.6)−Φ−1(0.4) .
The algorithm (Algorithm 1) estimates the angle of the current estimate with the true vector and then subsequently perturbs the vector to get closer after each successful iteration.
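A schematic rendering of the two procedures in code may help; the following Python sketch is illustrative only (the grid ranges, acceptance rule and sample handling are our assumptions, and the conditional probabilities are estimated by Monte Carlo over the ε-band rather than exactly).

import numpy as np
from scipy.stats import norm

def estimate_tan_alpha(z, f, X, eps, t_grid):
    # empirical Pr[sgn(f(x)) | x in l(z, t', eps)] on a grid of t' values
    proj = X @ z
    probs = np.full(len(t_grid), np.nan)
    for i, tp in enumerate(t_grid):
        band = (proj > tp - eps) & (proj <= tp)
        if band.sum() > 50:
            probs[i] = np.mean(f(X[band]) > 0)
    t1 = t_grid[np.nanargmin(np.abs(probs - 0.4))]
    t2 = t_grid[np.nanargmin(np.abs(probs - 0.6))]
    return (t2 - t1) / (norm.ppf(0.6) - norm.ppf(0.4))

def refine_estimate(z, f, X, eps, iters=200, seed=1):
    rng = np.random.default_rng(seed)
    d = len(z)
    t_grid = np.linspace(0.0, 6.0, 200)
    s = estimate_tan_alpha(z, f, X, eps, t_grid)
    for _ in range(iters):
        alpha = np.arctan(abs(s))
        p = rng.normal(scale=alpha / d, size=d)
        p -= (p @ z) * z                        # keep the perturbation normal to z
        z_new = (z + p) / np.linalg.norm(z + p)
        s_new = estimate_tan_alpha(z_new, f, X, eps, t_grid)
        if abs(s_new) < abs(s):                 # accept only angle-reducing moves
            z, s = z_new, s_new
    return z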
Theorem 5. Given a vector z ∈ S^{d−1} that is 1/poly(d)-close to the underlying true vector w*_1, that is, cos^{−1}(z^T w*_1) ≤ 1/poly(d), running RefineEstimate for O(T) iterations outputs a vector z* ∈ S^{d−1} such that cos^{−1}((z*)^T w*_1) ≤ (1 − c/d)^T γ for some constant c > 0, where γ is the initial angle. Thus after O(d log(1/ε)) iterations, cos^{−1}((z*)^T w*_1) ≤ ε.
We prove the correctness of the algorithm by first showing that EstimateTanAlpha gives a multiplicative approximation to tan(α). The following lemma captures this property.
Lemma 3. EstimateTanAlpha(z) outputs y such that y = (1 ± O(η)) tan(α) where α is the angle between z and w∗1 .
Proof. We first show that the given probability when computed with sgn(xTw∗1−t) is a well defined function of the angle between the current estimate and the true parameter up to multiplicative error. Subsequently we show that the computed probability is close to the one we can estimate using f(x) since the current estimate is close to one direction. The following two lemmas capture these properties.
Lemma 4. For t, t′ and ε ≤ 1/t′, we have Pr[x^T w*_1 ≥ t | x ∈ l(z, t′, ε)] = Φc((t − t* cos(α_1))/|sin(α_1)|) ± O(ε)t′ for some t* ∈ [t′ − ε, t′].
Lemma 5. For t′ ∈ [0, t/cos(α_1)], we have
Pr[sgn(f(x)) | x ∈ l(z, t′, ε)] = Pr[sgn((w*_1)^T x − t) | x ∈ l(z, t′, ε)] + d e^{−Ω(t²)}.
6for example with RELU activation, f will be c0 most of the time as other terms in P will never activate. So c0 can be set to say the median value of f .
Using the above, we can show that
t2 − t1 = (Φ^{−1}(0.6 − η1 ± O(ε)t1) − Φ^{−1}(0.4 − η2 ± O(ε)t2)) tan(α)
= (Φ^{−1}(0.6) − Φ^{−1}(0.4) − (η1 ± O(ε)t1)(Φ^{−1})′(p1) + (η2 ± O(ε)t2)(Φ^{−1})′(p2)) tan(α)
where η1, η2 > 0 are the noise due to estimating using f, and p1 ∈ [0.6 − η1 − O(ε)t1, 0.6] and p2 ∈ [0.4 − η2 − O(ε)t2, 0.4], as long as t1, t2 ∈ [0, t/cos(α_1)]. The following lemma bounds the range of t1 and t2.
Lemma 6. We have 0 ≤ t1 ≤ t2 ≤ t/cos(α_1).
Thus, we have
(t2 − t1)/(Φ^{−1}(0.6) − Φ^{−1}(0.4)) = (1 ± O(η1 + η2 + εt2)) tan(α)
as long as η2 + O(ε)t2 ≤ c for some constant c > 0. Thus, we can get a multiplicative approximation to tan(α) up to error η (ε can be chosen to make its contribution smaller than η).
Finally we show (proof in Appendix B.5) that with constant probability, a random perturbation reduces the angle by a factor of (1 − 1/d) of the current estimate, hence the algorithm will halt after O(d log(1/ν)) iterations.
Lemma 7. By applying a random Gaussian perturbation along the (d − 1)-dimensional hyperplane normal to z with the distribution N(0, Θ(α/d))^{d−1} and scaling back to the unit sphere, with constant probability the angle α (< π/2) with the fixed vector decreases by at least Ω(α/d).
3 SAMPLE COMPLEXITY
We extend the methods of the previous section to a broader class of polynomials, but only to obtain results in terms of sample complexity. The main idea, as in the previous section, is to correlate with δ′(z^T x − t) (the derivative of the Dirac delta function) and find arg max_{||z||_2=1} E[f(x)δ′(z^T x − t)]. We will show that the correlation goes to infinity when z is one of the w*_i and is bounded if it is far from all of them. From a practical standpoint we calculate δ′(z^T x − s) by measuring the correlation with (1/2ε)(δ(z^T x − s + ε) − δ(z^T x − s − ε)); in the limit as ε → 0 this becomes δ′(z^T x − s). δ(z^T x − s) in turn is estimated using (1/ε)(sgn(z^T x − s + ε) − sgn(z^T x − s)), as in the previous section, for an even smaller ε; however, for ease of exposition, in this section we will assume that correlations with δ(z^T x − s) can be measured exactly. Let us recall that f(x) = P(u((w*_1)^T x), u((w*_2)^T x), . . . , u((w*_d)^T x)). Let C1(f, z, s) denote E[f(x)δ(z^T x − s)] and let C2(f, z, s) denote E[f(x)(δ(z^T x − s − ε) − δ(z^T x − s + ε))].
If u = sgn then P has degree at most 1 in each Xi. Let ∂P/∂Xi denote the symbolic partial derivative of P with respect to Xi; it drops monomials without Xi and factors off Xi from the remaining ones. Let us separate the dependence on Xi in P as follows:
P(X1, . . . , Xd) = Xi Qi(X1, . . . , Xi−1, Xi+1, . . . , Xd) + Ri(X1, . . . , Xi−1, Xi+1, . . . , Xd);
then ∂P/∂Xi = Qi.
We will overload notation so that P[x] denotes the polynomial computed by substituting Xi = u((w*_i)^T x), and similarly for Qi and Ri. Under this notation f(x) = P[x]. We will also assume that |P(X)| ≤ ||X||^{O(1)} = ||X||^{c1} (say). By using simple correlations we will show:
Theorem 6. If u is the sgn function, P(X) ≤ ||X||^{c1} and for all i, E[Qi[x] | (w*_i)^T x = t] ≥ ε3, then using poly(d/(ε3 ε2)) samples one can determine the w*_i's within error ε2.7
Note that if all the w*_i's are orthogonal then the Xi are independent, and E[Qi[x] | (w*_i)^T x = t] is just the value of Qi evaluated by setting Xi = 1 and setting all the remaining Xj = µ, where µ = E[Xj]. This is the same as 1/µ times the coefficient of Xi in P(µ(X1 + 1), . . . , µ(Xd + 1)).
7The theorem can be extended to ReLU by correlating with the second derivative δ′′ (see Appendix C.1).
Corollary 1. If u is the sgn function and the w*_i's are orthogonal, then with sample complexity poly(d/(ε3 ε2)) one can determine W* within error ε2 in each entry, if the coefficient of the linear terms in P(µ(X1 + 1), µ(X2 + 1), µ(X3 + 1), . . .) is larger than ε3 µ, where µ = E[Xi].
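As a concrete illustration of the quantity in Corollary 1, one can extract the linear coefficients of P(µ(X1 + 1), . . . , µ(Xd + 1)) symbolically. In the sketch below (ours), the example polynomial and the value of µ are arbitrary choices.

import sympy as sp

mu = sp.Rational(1, 10)                       # stand-in for E[X_j]
X1, X2, X3 = sp.symbols('X1 X2 X3')
P = X1*X2 + X2*X3 + X1                        # toy P with one explicit linear term
shifted = sp.expand(P.subs({v: mu*(v + 1) for v in (X1, X2, X3)}, simultaneous=True))
for v in (X1, X2, X3):
    lin = shifted.coeff(v, 1).subs({w: 0 for w in (X1, X2, X3)})
    print(v, lin, lin / mu)                   # last column ~ E[Q_i | threshold crossing]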
The main point behind the proof of Theorem 6 is that the correlation is high when z is along one of w∗i and negligible if it is not close to any of them.
Lemma 8. Assume P(X) < ||X||^{c1}. If z = w*_i then C2(f, z, t) = φ(t)E[∂P/∂Xi | z^T x = t] ± ε d^{O(1)}. Otherwise, if all angles αi between z and the w*_i are at least ε2, it is at most ε d^{O(1)}/ε2.
We will use the notation g(x)|_{x=s} to denote g(x) evaluated at x = s. Thus Cauchy's mean value theorem can be stated as g(x + ε) − g(x) = ε g′(s′) for some s′ ∈ [x, x + ε]. We will overload notation a bit: φ(z^T x = s) will denote the probability density that z^T x = s; so if z is a unit vector this is just φ(s). φ(z1^T x = s1, z2^T x = s2) denotes the probability density that both z1^T x = s1 and z2^T x = s2; so again, if z1, z2 are orthonormal then this is just φ(s1)φ(s2).
The following claim interprets correlation with δ(zTx − s) as the expected value along the corresponding plane zTx = s. Claim 3. E[f(x)δ(zTx− s)] = E[f(x)|zTx = s]φ(zTx = s).
The following claim computes the correlation of P with δ′(z^T x − s).
Claim 4. E[P[x]δ′(z^T x − s)] is equal to
∑_i |cot(αi)| φ(z^T x = s, (w*_i)^T x = t) E[∂P/∂Xi[x] | z^T x = s, (w*_i)^T x = t] + φ′(s)E[P[x] | z^T x = s].
We use this to show that the correlation is bounded if all the angles are lower bounded.
Claim 5. If P(X) ≤ ||X||^{c1} and z has an angle of at least ε2 with all the w*_i's, then C2(f, z, s) ≤ ε d^{O(1)}/ε2.
The above claims can be used to prove the main Lemma 8. Refer to Appendix C for the proofs.
Proof of Theorem 6. If we wish to determine the w*_i within an angle of accuracy ε2, let us set ε to be O(ε3 ε2 φ(t) d^{−c}). From Lemma 8, for some large enough c, this will ensure that if all αi > ε2 the correlation is o(φ(t)ε3); otherwise it is at least φ(t)ε3(1 − o(1)). Since φ(t) = poly(1/d), given poly(d/(ε2 ε3)) samples we can test whether a given direction is within accuracy ε2 of a w*_i or not.
4 STRONGER RESULTS UNDER STRUCTURAL ASSUMPTIONS
Under additional structural assumptions on W∗ such as the weights being binary, that is, in {0, 1}, sparsity or certain restrictions on activation functions, we can give stronger recovery guarantees. Proofs have been deferred to Appendix D.
Theorem 7. Let the activation be u_t(a) = e^{ρ(a−t)} and let the weight vectors w*_i be 0/1 vectors that select coordinates of x, with exactly d indices j such that w_{ij} = 1 for each i. If the coefficient of the linear terms in P(µ(X1 + 1), µ(X2 + 1), µ(X3 + 1), . . .) for µ = e^{−ρt} is larger than the coefficient of all the product terms (by a constant factor gap), then we can learn W*.
In order to prove the above, we will construct a correlation graph over x1, . . . , xn and subsequently identify cliques in the graph to recover w∗i ’s.
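A toy version of the correlation-graph construction (ours, for illustration; the weights, ρ and t below are arbitrary) computes E[f(x) x_j x_k] for all pairs and reads off the supports of the w*_i's as blocks (cliques).

import numpy as np

rng = np.random.default_rng(0)
n, rho, t = 9, 0.5, 2.0
W = np.zeros((3, n))
W[0, :3] = W[1, 3:6] = W[2, 6:] = 1.0          # disjoint 0/1 weight vectors
X = rng.normal(size=(1_000_000, n))
f = np.exp(rho * (X @ W.T - t)).sum(axis=1)    # activation u_t(a) = e^{rho(a - t)}
C = (X.T * f) @ X / len(X)                     # C[j, k] ~ E[f(x) x_j x_k]
np.fill_diagonal(C, 0.0)
print(np.round(C, 2))                          # block pattern reveals the supports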
With no threshold, recovery is still possible for disjoint, low l1-norm vectors. The proof uses simple correlations and shows that the optimization landscape for maximizing these correlations has local maxima at the w*_i's.
Theorem 8. For activation u(a) = e^a: if all w*_i ∈ {0, 1}^n are disjoint, then we can learn the w*_i as long as P has all positive coefficients and product terms have degree at most 1 in each variable.
For even activations, it is possible to recover the weight vectors even when the threshold is 0. The technique used is the PCA-like optimization using Hermite polynomials as in Section 2. Denote C(S, µ) = ∑_{S⊆S′⊆[n]} c_{S′} µ^{|S′|}.
Theorem 9. If the activation is even and for every i, j: C({i}, û0) + C({j}, û0) > (6û2²/(û0 û4)) C({i, j}, û0), then there exists an algorithm that can recover the underlying weight vectors.
5 CONCLUSION
In this work we show how activations in a deep network that have a high threshold make it easier to learn the lowest layer of the network. We show that for a large class of functions that represent the upper layers, the lowest layer can be learned with high precision. Even if the threshold is low we show that the sample complexity is polynomially bounded. An interesting open direction is to apply these methods to learn all layers recursively. It would also be interesting to obtain stronger results if the high thresholds are only present at a higher layer based on the intuition we discussed.
A PREREQUISITES
A.1 HERMITE POLYNOMIALS
Hermite polynomials form a complete orthogonal basis for the Gaussian distribution with unit variance. For more details refer to Wikipedia contributors (2018). Let hi be the normalized Hermite polynomials. They satisfy the following:
Fact 0. E[hn(x)] = 0 for n > 0 and E[h0(x)] = 1.
Fact 1. Ea∼N(0,1)[hi(a)hj(a)] = δij where δij = 1 iff i = j.
This can be extended to the following:
Fact 2. For a, b with marginal distribution N(0, 1) and correlation ρ, E[hi(a)hj(b)] = δij ρ^j.
Consider the following expansion of u in the Hermite basis (hi):
u(a) = ∑_{i=0}^∞ ûi hi(a).
Lemma 9. For unit norm vectors v, w, E[u(v^T x)hj(w^T x)] = ûj (v^T w)^j.
Proof. Observe that v^T x and w^T x have marginal distribution N(0, 1) and correlation v^T w. Thus, using Fact 2,
E[u(v^T x)hj(w^T x)] = ∑_{i=0}^∞ ûi E[hi(v^T x)hj(w^T x)] = ∑_{i=0}^∞ ûi δij (v^T w)^j = ûj (v^T w)^j.
For Gaussians with mean 0 and variance σ², define the weighted Hermite polynomials H_l^σ(a) = |σ|^l hl(a/σ). Given input v^T x for x ∼ N(0, I), we suppress the superscript σ = ||v||.
Corollary 2. For a non-zero vector v (not necessarily unit norm) and a unit norm vector w, E[Hi(v^T x)hj(w^T x)] = δij (v^T w)^j.
Proof. It follows as in the proof of the previous lemma:
E[Hi(v^T x)hj(w^T x)] = ||v||^i E[hi(v^T x/||v||)hj(w^T x)] = ||v||^i δij (v^T w/||v||)^j = δij (v^T w)^j.
Fact 3. hn(x + y) = 2^{−n/2} ∑_{k=0}^n (n choose k) h_{n−k}(x√2) h_k(y√2).
Fact 4. hn(γx) = ∑_{k=0}^{⌊n/2⌋} γ^{n−2k}(γ² − 1)^k (n choose 2k) ((2k)!/k!) 2^{−k} h_{n−2k}(x).
Fact 5. α(n, m, γ) = E[hm(x)hn(γx)] = γ^{n−2k}(γ² − 1)^k (n choose 2k) ((2k)!/k!) 2^{−k} for k = (n − m)/2 if k ∈ Z+, else 0.
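These orthogonality facts are easy to sanity-check numerically; the following Monte Carlo snippet (ours, for illustration) verifies Fact 2 for a pair of correlated standard normals.

import numpy as np

rng = np.random.default_rng(0)
rho, m = 0.3, 2_000_000
a = rng.normal(size=m)
b = rho * a + np.sqrt(1 - rho**2) * rng.normal(size=m)   # corr(a, b) = rho
h2 = lambda v: (v**2 - 1) / np.sqrt(2)                   # normalized Hermite h2
h3 = lambda v: (v**3 - 3*v) / np.sqrt(6)                 # normalized Hermite h3
print(np.mean(h2(a) * h2(b)), rho**2)                    # approximately equal
print(np.mean(h2(a) * h3(b)), 0.0)                       # approximately zero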
A.2 PROPERTIES OF MATRICES
Consider a matrix A ∈ R^{m×m}. Let σi(A) be the ith singular value of A, with σ1(A) ≥ σ2(A) ≥ . . . ≥ σm(A), and set κ(A) = σ1(A)/σm(A).
Fact 6. |det(A)| = ∏m i=1 σi(A).
Fact 7. Let B be an (m − k) × (m − k) principal submatrix of A; then κ(B) ≤ κ(A).
A.3 ACTIVATION FUNCTIONS
Lemma 10. For u being a high-threshold ReLU, that is, u_t(a) = max(0, a − t), we have for t ≥ C for a large enough constant C > 0: E_{g∼N(0,σ²)}[u_t(g)] ≤ e^{−t²/2σ²}. Also, û4, û2 = t^{Θ(1)} e^{−t²/2}.
Proof. We have
E_{g∼N(0,σ²)}[u_t(g)] = (1/(√(2π)σ)) ∫_{−∞}^∞ max(0, g − t) e^{−g²/2σ²} dg
= (1/(√(2π)σ)) ∫_t^∞ (g − t) e^{−g²/2σ²} dg
≤ (1/(√(2π)σ)) ∫_t^∞ g e^{−g²/2σ²} dg
= (σ/√(2π)) ∫_{t²/2σ²}^∞ e^{−h} dh
= (σ/√(2π)) e^{−t²/2σ²}.
Also,
û4 = E_{g∼N(0,1)}[u_t(g)h4(g)]
= (1/√(2π)) ∫_{−∞}^∞ max(0, g − t)(g⁴ − 6g² + 3) e^{−g²/2} dg
= (1/√(2π)) ∫_t^∞ (g − t)(g⁴ − 6g² + 3) e^{−g²/2} dg
≥ (1/√(2π)) (t⁴ − 6t²)(1/t) e^{−t²/2 − 1 − 1/2t²}
≥ Ω(t³ e^{−t²/2}).
To upper bound,
û4 = (1/√(2π)) ∫_{−∞}^∞ max(0, g − t)(g⁴ − 6g² + 3) e^{−g²/2} dg
= (1/√(2π)) ∫_t^∞ (g − t)(g⁴ − 6g² + 3) e^{−g²/2} dg
≤ (1/√(2π)) ∫_t^∞ 2g⁵ e^{−g²/2} dg
= (Θ(1)/√(2π)) ∫_{t²/2}^∞ h² e^{−h} dh
= O(t⁴ e^{−t²/2}).
Similar analysis holds for û2.
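A quick numerical check of the bound in Lemma 10 (illustrative; the particular (t, σ) pairs below are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
for t, sigma in [(3.0, 1.0), (4.0, 1.5)]:
    g = rng.normal(scale=sigma, size=5_000_000)
    lhs = np.maximum(g - t, 0).mean()               # E[u_t(g)], g ~ N(0, sigma^2)
    rhs = np.exp(-t**2 / (2 * sigma**2))
    print(t, sigma, lhs, rhs, lhs <= rhs)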
Observe that sgn can be bounded very similarly replacing g− t by 1 which can affect the bounds up to only a polynomial in t factor.
Lemma 11. For u being a high-threshold sign, that is, u_t(a) = sgn(a − t), we have for t ≥ C for a large enough constant C > 0: E_{g∼N(0,σ²)}[u_t(g)] ≤ e^{−t²/2σ²}. Also, û4, û2 = t^{Θ(1)} e^{−t²/2}.
For sigmoid, the dependence varies as follows:
Lemma 12. For u being a high-threshold sigmoid, that is, u_t(a) = 1/(1 + e^{−(a−t)}), we have for t ≥ C for a large enough constant C > 0: E_{g∼N(0,σ²)}[u_t(g)] ≤ e^{−t + σ²/2}. Also, û4, û2 = Θ(e^{−t}).
Proof. We have
E_{g∼N(0,σ²)}[u_t(g)] = (1/(√(2π)σ)) ∫_{−∞}^∞ (1/(1 + e^{−(g−t)})) e^{−g²/2σ²} dg
= (e^{−t}/(√(2π)σ)) ∫_{−∞}^∞ (1/(e^{−t} + e^{−g})) e^{−g²/2σ²} dg
≤ (e^{−t}/(√(2π)σ)) ∫_{−∞}^∞ e^g e^{−g²/2σ²} dg
= (e^{−t} e^{σ²/2}/(√(2π)σ)) ∫_{−∞}^∞ e^{−(g−σ²)²/2σ²} dg
= e^{−t} e^{σ²/2}.
Also,
û4 = E_{g∼N(0,1)}[u_t(g)h4(g)]
= (1/√(2π)) ∫_{−∞}^∞ (1/(1 + e^{−(g−t)})) (g⁴ − 6g² + 3) e^{−g²/2} dg
= (e^{−t}/√(2π)) ∫_{−∞}^∞ (1/(e^{−t} + e^{−g})) (g⁴ − 6g² + 3) e^{−g²/2} dg
≥ (e^{−t}/√(2π)) ∫_0^∞ (1/(e^{−t} + e^{−g})) (g⁴ − 6g² + 3) e^{−g²/2} dg
≥ (e^{−t}/√(2π)) ∫_0^∞ (1/2)(g⁴ − 6g² + 3) e^{−g²/2} dg
= Ω(e^{−t}).
We can upper bound similarly and bound û2.
B APPROXIMATE RECOVERY WITH LINEAR TERMS
B.1 CONSTRAINED OPTIMIZATION VIEW OF LANDSCAPE DESIGN
Let us consider the linear case where the w*_i's are orthonormal. Consider the following maximization problem for even l ≥ 4,
max_{z∈S^{n−1}} sgn(ûl) · E[f(x) · Hl(z^T x)]
where hl is the lth Hermite polynomial. Then we have,
sgn(ûl) · E[f(x) · hl(z^T x)] = sgn(ûl) · E[(∑_{i=1}^k ci u_t((w*_i)^T x)) · hl(z^T x)]
= sgn(ûl) · ∑_{i=1}^k ci E[u_t((w*_i)^T x) · hl(z^T x)] = |ûl| ∑_{i=1}^k ci ((w*_i)^T z)^l.
It is easy to see that for z ∈ S^{n−1}, the above is maximized at exactly one of the w_i's (up to sign flip for even l) for l ≥ 3, as long as ûl ≠ 0. Thus, each w_i is a local maximum of the above problem.
Let L(z) = −∑_{i=1}^k ci z_i^l. For the constraint ||z||_2 = 1, we have the following optimality conditions (see Nocedal & Wright (2006) for more details).
First order:
∇L(z) − (z^T ∇L(z)/||z||²) z = 0 and ||z||_2 = 1.
Applied to our function, this gives us that for λ = −(l/2)(∑_i ci z_i^l)/||z||² (λ < 0),
−l ci z_i^{l−1} − 2λ zi = 0.
The above implies that either zi = 0 or z_i^{l−2} = −2λ/(l ci), with ||z||_2 = 1. For this to hold, z is such that for some set S ⊆ [n] only the i ∈ S have zi ≠ 0 and ∑_{i∈S} z_i² = 1, and for all i ∈ S, z_i^{l−2} = −2λ/(l ci).
Second order:
For all w ≠ 0 such that w^T z = 0, w^T(∇²L(z) − 2λI)w ≥ 0. For our function we have
∇²L(z) = −l(l − 1) diag(c ⊙ z^{l−2}), so (∇²L(z))_{ij} = 2(l − 1)λ if i = j and i ∈ S, and 0 otherwise.
The last step follows from the first order condition. For the second order condition to be satisfied we will show that |S| = 1. Suppose |S| ≥ 2; then choosing w such that wi = 0 for i ∉ S and w^T z = 0 (it is possible to choose such a w since |S| ≥ 2), we get w^T(∇²L(z) − 2λI)w = 2(l − 2)λ||w||², which is negative since λ < 0; thus these cannot be local minima. However, for |S| = 1 no such w exists: to satisfy w^T z = 0 we need wi = 0 for all i ∈ S, which gives w^T(∇²L(z) − 2λI)w = −2λ||w||², always positive. Thus z = ±ei are the only local minima of this problem.
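This landscape claim is easy to visualize numerically; the sketch below (ours) runs projected gradient ascent for l = 4 on the sphere and, from a random start, lands on a signed coordinate vector.

import numpy as np

rng = np.random.default_rng(0)
c = np.array([1.0, 2.0, 3.0, 4.0])          # arbitrary positive coefficients
z = rng.normal(size=4); z /= np.linalg.norm(z)
for _ in range(5000):
    grad = 4 * c * z**3                     # gradient of sum_i c_i z_i^4
    grad -= (grad @ z) * z                  # project onto tangent space of sphere
    z += 0.02 * grad
    z /= np.linalg.norm(z)
print(np.round(z, 3))                       # one coordinate is +/-1, the rest ~ 0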
B.2 IMPORTANT RESULTS FROM GE ET AL. (2017)
Lemma 13 (Ge et al. (2017)). If z is an (ε, τ)-local minimum of F(z) = −∑_i αi z_i⁴ + λ(∑_i z_i² − 1)² for ε ≤ √(τ³/αmin), where αmin = min_i αi, then
• (Lemma 5.2) |z|_2nd ≤ √(τ/αmin), where |z|_2nd denotes the magnitude of the second largest entry of z in absolute value.
• (Derived from Proposition 5.7) z_max = ±1 ± O(dτ/αmin) ± O(ε/λ), where |z|_max is the largest entry of z in absolute value.
B.3 OMITTED PROOFS FOR ONE-BY-ONE RECOVERY
Proof of Lemma 1. Let O ∈ R^{d×d} be the orthonormal basis (row-wise) of the subspace spanned by the w*_i for all i ∈ [d], generated using Gram-Schmidt (with the procedure done in order, with the elements of S first). Now let O_S ∈ R^{|S|×d} be the matrix corresponding to the first |S| rows and let O_S^⊥ ∈ R^{(d−|S|)×d} be the matrix corresponding to the remaining rows. Note that OW* (with W* in the same ordering) is an upper triangular matrix under this construction.
E[∏_{j∈S} u_t((w*_j)^T x)] = (1/(2π)^{n/2}) ∫_x ∏_{i∈S} u_t(x^T w*_i) e^{−||x||²/2} dx
= (1/(2π)^{n/2}) ∫_x ∏_{i∈S} u_t((O_S w*_i)^T O_S x) e^{−(||O_S x||² + ||O_S^⊥ x||²)/2} dx
= ((1/(2π)^{|S|/2}) ∫_{x′∈R^{|S|}} ∏_{i∈S} u_t((O_S w*_i)^T x′) e^{−||x′||²/2} dx′) ((1/(2π)^{(d−|S|)/2}) ∫_{x′∈R^{d−|S|}} e^{−||x′||²/2} dx′)
= (1/(2π)^{|S|/2}) ∫_{x′∈R^{|S|}} ∏_{i∈S} u_t((O_S w*_i)^T x′) e^{−||x′||²/2} dx′
= (|det(O_S W*_S)|^{−1}/(2π)^{|S|/2}) ∫_{b∈R^{|S|}} ∏_{i∈S} u_t(bi) e^{−||(O_S W*_S)^{−T} b||²/2} db.
Now observe that O_S W*_S is also upper triangular, since it is a principal sub-matrix of OW*. Thus, using Facts 6 and 7, we get the last equality. Also, the row with a single non-zero entry has that entry equal to 1 (since ||w*_i|| = 1 for all i), so the inverse also has a row whose single non-zero entry is 1. WLOG assume index 1 corresponds to this row. Thus we can split this as follows:
E[∏_{j∈S} u_t((w*_j)^T x)] ≤ |det(O_S W*_S)|^{−1} ((1/√(2π)) ∫_{b1} u_t(b1) e^{−b1²/2} db1) ∏_{i∈S\{1}} (1/√(2π)) ∫_{bi} u_t(bi) e^{−bi²/(2||O_S W*_S||²)} dbi
≤ |det(O_S W*_S)|^{−1} ((1/√(2π)) ∫_{b1} u_t(b1) e^{−b1²/2} db1) ∏_{i∈S\{1}} (1/√(2π)) ∫_{bi} u_t(bi) e^{−bi²/(2||W*||²)} dbi
≤ ρ(t, 1) (κ(W*) ρ(t, ||W*||))^{|S|−1}.
Proof of Claim 1. Consider the SVD of the matrix M = UDU^T. Let W = UD^{−1/2} and yi = √ci W^T w*_i for all i. It is easy to see that the yi are orthogonal. Let F(z) = G(Wz):
F(z) = |û4| ∑_i ci (z^T W^T w*_i)⁴ − λû2² (∑_i ci (z^T W^T w*_i)² − 1)²
= |û4| ∑_i (1/ci) (z^T yi)⁴ − λû2² (∑_i (z^T yi)² − 1)².
Since the yi are orthogonal, for the purpose of analysis we can assume that yi = ei; thus the formulation reduces to max_z |û4| ∑_i (1/ci) z_i⁴ − λ′(||z||² − 1)², up to the scaling λ′ = λû2². Note that this is of the form in Lemma 13, hence using that lemma we can show that the approximate local minima of F(z) are close to the yi, and thus the local maxima of G(z) are close to W yi = √ci W W^T w*_i = √ci M^{−1} w*_i due to the linear transformation. This can alternately be viewed as the columns of (TW*)^{−1}, since TW* M^{−1} (TW*)^T = I.
Proof of Theorem 4. Let Z be an (ε, τ)-local minimum of A; then ||∇A(Z)|| ≤ ε and λmin(∇²A(Z)) ≥ −τ. Observe that
||∇B(Z)|| = ||∇(A + (B − A))(Z)|| ≤ ||∇A(Z)|| + ||∇(B − A)(Z)|| ≤ ε + ρ.
Also observe that
λmin(∇²B(Z)) = λmin(∇²(A + (B − A))(Z)) ≥ λmin(∇²A(Z)) + λmin(∇²(B − A)(Z)) ≥ −τ − ||∇²(B − A)(Z)|| ≥ −τ − γ.
Here we use |λmin(M)| ≤ ||M|| for any symmetric matrix M. To prove this, note that ||M|| = max_{x∈S^{n−1}} ||Mx||. Write x = ∑_i xi vi, where the vi are the eigenvectors; then Mx = ∑_i xi λi(M) vi with ∑ xi² = 1, which gives ||M|| = max √(∑_i xi² λi²(M)) ≥ |λmin(M)|.
Proof of Lemma 2. Expanding f , we have
E[|∆(x)|] = E[|∑_{S⊆[d]:|S|>1} cS ∏_{j∈S} u_t((w*_j)^T x)|]
≤ ∑_{S⊆[d]:|S|>1} |cS| E[∏_{j∈S} u_t((w*_j)^T x)]
≤ C ∑_{S⊆[d]:|S|>1} ρ(t, 1) ((1/σmin(W*)) ρ(t, ||W*||))^{|S|−1}   (using Lemma 1)
= C ∑_{i=2}^d (d choose i) ρ(t, 1) ((1/σmin(W*)) ρ(t, ||W*||))^{i−1}
≤ C ∑_{i=2}^d d ρ(t, 1) ((d/σmin(W*)) ρ(t, ||W*||))^{i−1}   (using (d choose i) ≤ d^i)
≤ C d² ρ(t, 1) ((d/σmin(W*)) ρ(t, ||W*||))   (using the assumption on t).
Lemma 14. For any function L such that ||L(z, x)|| ≤ C(z)||x||^{O(1)}, where C is a function not dependent on x, we have ||E[∆(x)L(z, x)]|| ≤ C(z) d^{−(1+p)η+3} O(log d).
Proof. We have
||E[∆(x)L(z, x)]|| ≤ E[|∆(x)| ||L(z, x)||] ≤ E[|∆(x)| C(z) ||x||^{O(1)}]
= C(z)(E[|∆(x)| ||x||^{O(1)} | ||x|| ≥ c] Pr[||x|| ≥ c] + E[|∆(x)| ||x||^{O(1)} | ||x|| < c] Pr[||x|| < c])
≤ C(z)(E[||x||^{O(1)} | ||x|| ≥ c] Pr[||x|| ≥ c] + c^{O(1)} E[|∆(x)|])
= C(z)(c^{O(1)} e^{−c²/2} + c^{O(1)} E[|∆(x)|]).
Now using Lemma 2 to bound E[|∆(x)|], for c = Θ(√(η log d)) we get the required result.
Lemma 15. For ||z|| = Ω(1) and λ = Θ(|û4|/û2²) ≈ d^η, we have ||∇G(z)|| ≥ Ω(1)d^{−η}||z||³.
Proof. Let K = κ(W*), which by assumption is Θ(1). We will argue that a local minimum of G cannot have z with large norm. First let us argue this for Glin(z). We know that Glin(z) = −α ∑ (z^T w*_i)⁴ + λβ² ((∑ (z^T w*_i)²) − 1)², where α = |û4| and β = û2. We will argue that z^T ∇Glin(z) is large if ||z|| is large:
z^T ∇Glin(z) = −4α ∑ (z^T w*_i)³ (z^T w*_i) + 2λβ² (∑ (z^T w*_i)² − 1)(∑ 2(z^T w*_i)(z^T w*_i))
= −4α ∑ (z^T w*_i)⁴ + 4λβ² (∑ (z^T w*_i)² − 1)(∑ (z^T w*_i)²).
Let y = W* z; then K||z|| ≥ ||y|| ≥ ||z||/K, since K is the condition number of W*. This implies
z^T ∇Glin(z) = −4α ∑ yi⁴ + 4λβ² (||y||² − 1)||y||² ≥ 4||y||²((λβ² − α)||y||² − λβ²) ≥ Ω(1)d^{−η}||y||⁴
since ||y|| ≥ ||z||/K = Ω(1) and, by the assumptions on λ and z, λβ² − α = Ω(λβ²). Thus z^T ∇Glin(z) ≥ Ω(λβ²||y||⁴) = Ω(1)d^{−η}||z||⁴, which implies ||∇Glin(z)|| = Ω(1)d^{−η}||z||³. Now we need to argue the same for G:
G(z) − Glin(z) = −sgn(û4)E[(flin(x) + ∆(x))H4(z^T x)] + λ(E[(flin(x) + ∆(x))H2(z^T x)] − β)² + sgn(û4)E[flin(x)H4(z^T x)] − λ(E[flin(x)H2(z^T x)] − β)²
= −sgn(û4)E[∆(x)H4(z^T x)] + λE[∆(x)H2(z^T x)]² + 2λE[∆(x)H2(z^T x)]E[flin(x)H2(z^T x) − β]
= −sgn(û4)||z||⁴E[∆(x)h4(z^T x/||z||)] + λ||z||⁴E[∆(x)h2(z^T x/||z||)]² + 2λ||z||⁴E[∆(x)h2(z^T x/||z||)]E[flin(x)h2(z^T x/||z||)] − 2λβ||z||²E[∆(x)h2(z^T x/||z||)].
Now h4(z^T x/||z||) has no gradient in the direction of z, so z^T ∇h4(z^T x/||z||) = 0, and similarly z^T ∇h2(z^T x/||z||) = 0. So
z^T ∇(G(z) − Glin(z)) = −4 sgn(û4)||z||⁴E[∆(x)h4(z^T x/||z||)] + 4λ||z||⁴(E[∆(x)h2(z^T x/||z||)])² + 8λ||z||⁴E[∆(x)h2(z^T x/||z||)]E[flin(x)h2(z^T x/||z||)] − 4λβ||z||²E[∆(x)h2(z^T x/||z||)].
We know that E[flin(x)h2(z^T x/||z||)] has a factor of β, giving us, using Lemma 14:
|z^T ∇(G(z) − Glin(z))| ≤ O(log d)d^{−(1+p)η+3}||z||⁴.
So z^T ∇G(z) is also Ω(1)d^{−η}||z||⁴, and hence ||∇G(z)|| ≥ Ω(1)d^{−η}||z||³.
Proof of Claim 2. We have G − Glin as follows:
G(z) − Glin(z) = −sgn(û4)E[(flin(x) + ∆(x))H4(z^T x)] + λ(E[(flin(x) + ∆(x))H2(z^T x)] − û2)² + sgn(û4)E[flin(x)H4(z^T x)] − λ(E[flin(x)H2(z^T x)] − û2)²
= −sgn(û4)E[∆(x)H4(z^T x)] + λ(E[∆(x)H2(z^T x)])² + 2λE[∆(x)H2(z^T x)]E[flin(x)H2(z^T x) − û2].
Thus we have
∇(G(z) − Glin(z)) = −sgn(û4)E[∆(x)∇H4(z^T x)] + 2λE[∆(x)H2(z^T x)]E[∆(x)∇H2(z^T x)] + 2λE[flin(x)H2(z^T x) − û2]E[∆(x)∇H2(z^T x)] + 2λE[∆(x)H2(z^T x)]E[flin(x)∇H2(z^T x)].
Observe that H2 and H4 are degree 2 and 4 polynomials respectively, thus the norms of their gradients and Hessians can be bounded by O(||z|| ||x||⁴). Using Lemma 14 we can bound each term by roughly O(log d)d^{−(1+p)η+3}||z||⁴. Note that λ being large does not hurt, as it is scaled appropriately in each term. Subsequently, using Lemma 15, we can show that ||z|| is bounded by a constant, since ||∇G(z)|| ≤ ε ≤ d^{−2η}. Similar analysis holds for the Hessian too.
Now applying Theorem 4 gives us that z is an (O(log d)d^{−(1+p)η+3}, O(log d)d^{−(1+p)η+3})-approximate local minimum of Glin. This implies that it is also an (ε′ := C log(d)d^{−(1+2p)η+3}, τ′ := C log(d)d^{−(1+2p/3)η+3})-approximate local minimum of Glin for large enough C > 0, by increasing τ. Observe that √(τ′³/|û4|) = C^{3/2} log^{3/2}(d) d^{−(3/2+p)η+9/2}/d^{−η/2} = C^{3/2} log^{3/2}(d) d^{−(1+p)η+9/2} ≥ ε′. Now using Claim 1, we get the required result.
B.4 SIMULTANEOUS RECOVERY
Ge et al. (2017) also showed simultaneous recovery: the loss function Glin defined below has a well-behaved landscape, and minimizing it recovers all of W* at once.
Glin(W) = E[flin(x) ∑_{j,k∈[d], j≠k} ψ(wj, wk, x)] − γ E[flin(x) ∑_{j∈[d]} H4(wj^T x)] + λ ∑_i (E[flin(x)H2(wi^T x)] − û2)²   (1)
where ψ(v, w, x) = H2(v^T x)H2(w^T x) + 2(v^T w)² + 4(v^T x)(w^T x)v^T w.
They gave the following result.
Theorem 10 (Ge et al. (2017)). Let c be a sufficiently small universal constant (e.g. c = 0.01 suffices), and suppose the activation function u satisfies û4 ≠ 0. Assume γ ≤ c, λ ≥ Ω(|û4|/û2²), and let W* be the true weight matrix. The function Glin satisfies the following:
1. Any saddle point W has a strictly negative curvature in the sense that λmin(∇²Glin(W)) ≤ −τ0, where τ0 = c·min{γ|û4|/d, λû2²}.
2. Suppose W is an (ε, τ0)-approximate local minimum; then W can be written as W^{−T} = PDW* + E, where D is a diagonal matrix with Dii ∈ {±1 ± O(γ|û4|/λû2²) ± O(ε/λ)}, P is a permutation matrix, and the error term satisfies ||E|| ≤ O(εd/û4).
We show that this minimization is robust. Let us consider the function G corresponding to Glin, with the additional non-linear terms, as follows:
G(W) = E[f(x) ∑_{j,k∈[d], j≠k} ψ(wj, wk, x)] − γ E[f(x) ∑_{j∈[d]} H4(wj^T x)] + λ ∑_i (E[f(x)H2(wi^T x)] − û2)².
Now we can show that G and Glin are close as in the one-by-one case.
R(W) := G(W) − Glin(W)
= E[∆(x)A(W, x)] − γE[∆(x)B(W, x)] + λ(E[f(x)C(W, x)]² − E[flin(x)C(W, x)]²)
= E[∆(x)A(W, x)] − γE[∆(x)B(W, x)] + λE[∆(x)C(W, x)]E[(f(x′) + flin(x′))C(W, x′)]
= E[∆(x)A(W, x)] − γE[∆(x)B(W, x)] + λE[∆(x)D(W, x)]
= E[∆(x)(A(W, x) − γB(W, x) + λD(W, x))] = E[∆(x)L(W, x)]
where A(W, x) = ∑_{j,k∈[d], j≠k} ψ(wj, wk, x), B(W, x) = ∑_{j∈[d]} H4(wj^T x), C(W, x) = ∑_i H2(wi^T x), D(W, x) = C(W, x)E[(f(x′) + flin(x′))C(W, x′)] and L(W, x) = A(W, x) − γB(W, x) + λD(W, x).
Using a similar analysis to the one-by-one case, we can show the required closeness. It is easy to see that ||∇L|| and ||∇²L|| will be bounded above by a constant-degree polynomial in O(log d)d^{−(1+p)η+3} max_i ||wi||⁴. No row can have large weight: if any row were large, then looking at the gradient for that row reduces to the one-by-one case, where it cannot be larger than a constant. Thus we have the same closeness as in the one-by-one case. Combining this with Theorems 10 and 4, we have the following theorem:
Theorem 11. Let c be a sufficiently small universal constant (e.g. c = 0.01 suffices), and let Assumptions 1, 2 and 3 hold. Assume γ ≤ c, λ = Θ(d^η), and let W* be the true weight matrix. The function G satisfies the following:
1. Any saddle point W has a strictly negative curvature in the sense that λmin(∇²G(W)) ≤ −τ0, where τ0 = O(log d)d^{−Ω(1)}.
2. Suppose W is a (d^{−Ω(1)}, d^{−Ω(1)})-approximate local minimum; then W can be written as W^{−T} = PDW* + E, where D is a diagonal matrix with Dii ∈ {±1 ± O(γ) ± d^{−Ω(1)}}, P is a permutation matrix, and the error term satisfies ||E|| ≤ O(log d)d^{−Ω(1)}.
Using standard optimization techniques we can find such a local minimum.
B.5 APPROXIMATE TO ARBITRARY CLOSE
Lemma 16. If u is the sign function then E[u(wTx)δ′(zTx)] = c| cot(α)| where w, z are unit vectors and α is the angle between them and c is some constant.
Proof. WLOG we can work in the plane spanned by z and w, and assume that z is the unit vector i along the x-axis and w = i cos α + j sin α. Thus we can replace the vector x by ix + jy, where x, y are standard normal scalars. Also note that u′ = δ (the Dirac delta function).
E[u(w^T x)δ′(z^T x)] = E[u(x cos α + y sin α)δ′(x)]
= ∫_y ∫_x u(x cos α + y sin α)δ′(x)φ(x)φ(y) dx dy.
Using the fact that ∫_x δ′(x)h(x) dx = h′(0), this becomes
= ∫_y φ(y)[(∂/∂x)(u(x cos α + y sin α)φ(x))]_{x=0} dy
= ∫_y φ(y)[φ(x)u′(x cos α + y sin α) cos α + φ′(x)u(x cos α + y sin α)]_{x=0} dy
= ∫_{y=−∞}^∞ φ(y)φ(0)δ(y sin α) cos α dy
(the second term vanishes since φ′(0) = 0). Substituting s = y sin α, this becomes
= ∫_{s=−∞/sin α}^{∞/sin α} φ(s/sin α)φ(0)δ(s) cos α (1/sin α) ds
= sgn(sin α) cot(α) φ(0) ∫_s φ(s/sin α)δ(s) ds = |cot(α)| φ(0)φ(0).
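This calculation can be checked by Monte Carlo, with δ′ replaced by a finite difference of narrow band indicators. All constants below are illustrative, and we use the 0/1 step function, for which the same derivation gives |cot(α)|φ(0)².

import numpy as np

rng = np.random.default_rng(0)
m, eps, h, alpha = 5_000_000, 0.02, 0.05, 0.7
x = rng.normal(size=(m, 2))
z = np.array([1.0, 0.0])
w = np.array([np.cos(alpha), np.sin(alpha)])
u = (x @ w > 0).astype(float)                  # step activation u(a) = 1[a > 0]

def corr_delta(s):                             # ~ E[u(w^T x) delta(z^T x - s)]
    band = (x @ z > s) & (x @ z <= s + eps)
    return (u * band).mean() / eps

est = (corr_delta(h) - corr_delta(-h)) / (2 * h)
phi0 = 1 / np.sqrt(2 * np.pi)
print(est, abs(1 / np.tan(alpha)) * phi0**2)   # approximately equal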
Proof of Lemma 4. Let us compute the probability of lying in the ε-band for any t:
Pr[x ∈ l(z, t, ε)] = Pr[t − ε ≤ z^T x ≤ t] = Pr_{g∼N(0,||z||²)}[t − ε ≤ g ≤ t]
= (1/(√(2π)||z||)) ∫_{g=t−ε}^t e^{−g²/2||z||²} dg = (ε/(√(2π)||z||)) e^{−t̄²/2||z||²}
where the last equality follows from the mean-value theorem for some t̄ ∈ [t − ε, t]. Next we compute the following:
Pr[x^T w*_1 ≥ t and x ∈ l(z, t′, ε)]
= (1/(2π)^{n/2}) ∫_x sgn(x1 − t) 1[x ∈ l(z, t′, ε)] e^{−||x||²/2} dx
= (1/(2π)^{1/2}) ∫_{x1=t}^∞ e^{−x1²/2} ((1/(2π)^{(n−1)/2}) ∫_{x−1} 1[x−1 ∈ l(z−1, t′ − z1x1, ε)] e^{−||x−1||²/2} dx−1) dx1
= (1/(2π)^{1/2}) ∫_{x1=t}^∞ e^{−x1²/2} Pr[x−1 ∈ l(z−1, t′ − z1x1, ε)] dx1
= (1/(2π||z−1||)) ∫_{g=t′−ε}^{t′} ∫_{x1=t}^∞ e^{−x1²/2} e^{−(g−z1x1)²/2||z−1||²} dx1 dg
= (1/(2π||z−1||)) ∫_{g=t′−ε}^{t′} e^{−g²/2||z||²} ∫_{x1=t}^∞ e^{−(x1 − gz1/||z||²)² ||z||²/(2||z−1||²)} dx1 dg
= (1/(√(2π)||z||)) ∫_{g=t′−ε}^{t′} e^{−g²/2||z||²} Φc((t||z||² − gz1)/(||z−1|| ||z||)) dg
= (ε/√(2π)) e^{−t*²/2} Φc((t − t* cos(α1))/|sin(α1)|)
where the last equality follows from the mean-value theorem for some t* ∈ [t′ − ε, t′] (using ||z|| = 1, z1 = cos(α1) and ||z−1|| = |sin(α1)|). Combining this with the band probability computed above (applied at t′, with t̄ ∈ [t′ − ε, t′]), we get:
Pr[x^T w*_1 ≥ t | x ∈ l(z, t′, ε)] = e^{−(t*² − t̄²)/2} Φc((t − t* cos(α1))/|sin(α1)|) = Φc((t − t* cos(α1))/|sin(α1)|) ± O(ε)t′
for ε ≤ 1/t′.
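The conditional probability formula is easy to validate empirically. Below is a small Monte Carlo check (ours; all parameters arbitrary) in the plane spanned by z and w*_1.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m, eps, alpha, t, tp = 5_000_000, 0.02, 0.6, 1.0, 0.8
x = rng.normal(size=(m, 2))
z = np.array([1.0, 0.0])
w = np.array([np.cos(alpha), np.sin(alpha)])   # plays the role of w*_1
band = (x @ z > tp - eps) & (x @ z <= tp)
lhs = np.mean(x[band] @ w >= t)                # Pr[x^T w >= t | x in l(z, t', eps)]
rhs = norm.sf((t - tp * np.cos(alpha)) / abs(np.sin(alpha)))
print(lhs, rhs)                                # match up to O(eps) t' corrections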
Proof of Lemma 5. Recall that P is monotone with positive linear terms; thus for a high-threshold u (zero unless the input exceeds t, and positive after) we have sgn(f(x)) = ∨_i sgn(x^T w*_i − t). This is because, for any i, P applied to Xi > 0 and Xj = 0 for all j ≠ i gives ci Xi, which is positive. Also, P(0) = 0. Thus, sgn(P) is 1 if any of the inputs is positive. Using this, we have
Pr[sgn(f(x)) | x ∈ l(z, t′, ε)] ≥ Pr[sgn((w*_1)^T x − t) | x ∈ l(z, t′, ε)].
Also,
Pr[sgn(f(x)) | x ∈ l(z, t′, ε)] ≤ ∑_i Pr[sgn(x^T w*_i − t) | x ∈ l(z, t′, ε)]
= Pr[sgn((w*_1)^T x − t) | x ∈ l(z, t′, ε)] + ∑_{i≠1} Pr[sgn(x^T w*_i − t) | x ∈ l(z, t′, ε)]
≤ Pr[sgn((w*_1)^T x − t) | x ∈ l(z, t′, ε)] + η
where ∑_{i≠1} Pr[sgn(x^T w*_i − t) | x ∈ l(z, t′, ε)] ≤ η. We will show that η is not large: since z is close to one of the vectors, it cannot be close to the others, so the αi will be large for all i ≠ 1. Let us bound η:
∑_{i≠1} Pr[sgn(x^T w*_i − t) | x ∈ l(z, t′, ε)] ≤ ∑_{i≠1} (Φc((t − t*_i cos(αi))/|sin(αi)|) + O(ε)t′)
≤ ∑_{i≠1} (Φc((t − t′ cos(αi))/|sin(αi)|) + O(ε)t′)
≤ ∑_{i≠1} (1/(√(2π)γi)) e^{−γi²/2} + O(ε)dt′
where γi = (t − t′ cos(αi))/|sin(αi)|. The above follows since γi ≥ 0 by the assumption on t′. Under the assumption, let β = max_{i≠1} cos(αi); we have
γi ≥ t(1 − β/cos(α1))/√(1 − β²) = Ω(t)
under our setting. Thus we have
∑_{i≠1} Pr[sgn(x^T w*_i − t) | x ∈ l(z, t′, ε)] ≤ d e^{−Ω(t²)} + O(ε)dt = d e^{−Ω(t²)}
for small enough ε.

1. What is the focus of the paper regarding neural networks and their recovery guarantees?
2. What are the concerns regarding the assumptions and claims made in the paper?
3. Do you have any questions about the correctness and writing style of the paper?

Review
This paper gives provable recovery guarantees for a class of neural networks which have high-threshold activation in the first layer, followed by a "well-behaved" polynomial, under Gaussian input. The algorithm is based on the approach by Ge et al. (2017), as well as an iterative refinement method.
While this could be an interesting result, I have several concerns regarding the assumptions, correctness, and writing.
1) It is required that the threshold is at least sqrt{log d} (Thm. 1), where d is the number of hidden neurons in the first layer. It seems that this essentially zeros out almost all the neurons, since the maximum among d Gaussian random variables is roughly sqrt{log d}. The authors should explain what exactly this model is doing, i.e., what kind of functions it can compute, in order to justify why this is an interesting model.
Furthermore, the authors claim that the studied model is a "deep" neural network, but I disagree. As I understand, the difference between this model and two-layer networks is that the second layer here is a polynomial instead of a linear function. This doesn't make it a deep network since the (polynomial) part above the first layer is not modeled in a layer-wise fashion, not to mention that under the setting considered in the paper the polynomial behaves similar to a linear function.
2) It is stated at the end of Section 2 that the angle can be reduced by a factor of 1-1/d ***with constant probability***. How does this ensure you can succeed after O(d log(1/nu)) iterations? As far as I see you need the success probability in one iteration to be at least something like 1-1/Omega(d) so that you can apply a union bound.
3) Even if the issues of motivation and correctness are clarified, I find it very difficult to understand the overall intuition and main technical contributions in this paper. The writing needs to be significantly improved to reach the level of a top conference. |
ICLR | Title
Recovering the Lowest Layer of Deep Networks with High Threshold Activations
Abstract
Giving provable guarantees for learning neural networks is a core challenge of machine learning theory. Most prior work gives parameter recovery guarantees for one-hidden-layer networks; however, the networks used in practice have multiple non-linear layers. In this work, we show how to strengthen such results to deeper networks: we address the problem of uncovering the lowest layer in a deep neural network, under the assumptions that the lowest layer uses a high threshold before applying the activation, the upper network can be modeled as a well-behaved polynomial, and the input distribution is Gaussian.
1 INTRODUCTION
Understanding the landscape of learning neural networks has been a major challenge in machine learning. Various works give parameter recovery guarantees for simple one-hidden-layer networks where the hidden layer applies a non-linear activation u after transforming the input x by a matrix W, and the upper layer is the weighted sum operator: thus f(x) = ∑ ai u(wi^T x). However, the networks used in practice have multiple non-linear layers, and it is not clear how to extend these known techniques to deeper networks.
We consider a multilayer neural network with the first layer activation u and the layers above represented by an unknown polynomial P such that it has non-zero non-linear components. More precisely, the function f computed by the neural network is as follows:
fW(x) = P(u(w1^T x), u(w2^T x), . . . , u(wd^T x)) for P(X1, . . . , Xd) = ∑_{r∈Z+^d} cr · ∏_j Xj^{rj}.
We assume that the input x is generated from the standard Gaussian distribution and there is an underlying true network (parameterized by some unknown W∗)1 from which the labels are generated.
In this work we strengthen previous results for one hidden layer networks to a larger class of functions representing the transform made by the upper layer functions if the lowest layer uses a high threshold (high bias term) before applying the activation: u(a − t) instead of u(a). Intuitively, a high threshold is looking for a high correlation of the input a with a direction w∗i . Thus even if the function f is applying a complex transform after the first layer, the identity of these high threshold directions may be preserved in the training data generated using f .
Learning with linear terms in P. Suppose P has a linear component; then we show that increasing the threshold t in the lowest layer is equivalent to amplifying the coefficients of the linear part. Instead of dealing with the polynomial P, it turns out that we can roughly think of it as P(µX1, . . . , µXd), where µ decreases exponentially in t (µ ≈ e^{−t²}). As µ decreases it has the effect of diminishing the non-linear terms more strongly, so that the linear terms stand out in relative terms. Taking advantage of this effect, we manage to show that if t exceeds a certain threshold, the non-linear terms drop in value enough so that the directions wi can be learned by relatively simple methods. We show that we can get close to the wi by applying a simple variant of PCA. While an application of PCA can be thought of as finding principal directions as the local maxima of max_{||z||=1} E[f(x)(z^T x)²],
1We suppress W when it is clear from context.
we instead perform max_{E[f(x)H2(z^T x)]=1} E[f(x)H4(z^T x)]2. If W* has a constant condition number then the local maxima can be used to recover directions that are transforms of the wi.
Theorem 1 (informal version of Claim 2, Theorem 11). If t > c√(log d) for a large enough constant c > 0 and P has linear terms with absolute value of coefficients at least 1/poly(d) and all coefficients at most O(1), we can recover the weight vector wi within error 1/poly(d) in time poly(d).
These approximations of the wi obtained collectively can be further refined by looking at directions along which there is a high gradient in f; for monotone functions we show how in this way we can recover the wi exactly (or within any desired precision ε).
Theorem 2 (informal version of Theorem 5). Under the conditions of the previous theorem, for monotone P, there exists a procedure to refine the angle to precision ε in time poly(1/ε, d), starting from an estimate that is 1/poly(d) close.
The above mentioned theorems hold for u being sign and ReLU.3
When P is monotone and u is the sign function, learning W is equivalent to learning a union of half spaces. We learn W* by learning the sign of P, which is exactly the union of the halfspaces wi^T x ≥ t. Thus our algorithm can also be viewed as a polynomial time algorithm for learning a union of a large number of half spaces that are far from the origin; to our knowledge this is the first polynomial time algorithm for this problem with this extra requirement (see the earlier work Vempala (2010) for an exponential time algorithm). Refer to Appendix B.6 for more details.
Such linear components in P may easily be present: consider for example the case where P(X) = u(v^T X − b), where u is, say, the sigmoid or the logloss function. The Taylor series of such functions has a linear component; note that since the linear term in the Taylor expansion of u(x) has coefficient u′(0), for the expansion of u(x − b) it will be u′(−b), which is Θ(e^{−b}) in the case of the sigmoid. In fact one may even have a tower (deep network) of such sigmoid/logloss layers and the linear components will still be present – unless they are made to cancel out precisely; however, the coefficients will drop exponentially in the depth of the network and the threshold b.
Sample complexity with low thresholds and no explicit linear terms. Even if the threshold is not large or P is not monotone, we show that W* can be learned with polynomial sample complexity (although possibly exponential time complexity) by finding directions that maximize the gradient of f.
Theorem 3 (informal version of Corollary 1). If u is the sign function and the wi's are orthogonal, then with poly(1/ε, d) samples one can determine W* within precision ε if the coefficient of the linear terms in P(µ(X1 + 1), µ(X2 + 1), µ(X3 + 1), . . .) is at least 1/poly(d).
Learning without explicit linear terms. We further provide evidence that P may not even need to have linear terms; in some restricted cases (Section 4), we show how such linear terms may implicitly arise even though they may be apparently entirely absent. For instance, consider the case when P = ∑ XiXj, which has no linear terms. Under certain additional assumptions we show that one can recover the wi as long as the polynomial P(µ(X1 + 1), µ(X2 + 1), µ(X3 + 1), . . .) (where µ ≈ e^{−t}) has linear components larger than the coefficients of the other terms. Note that this transform, when applied to P, automatically introduces linear terms, and as the threshold increases, applying this transform to P has the effect of gathering linear components from all the different monomials in P and penalizing the higher degree monomials. We show that if W* is a sparse binary matrix, then we can recover W* when the activation is u(a) = e^{ρa}, under certain assumptions about the structure of P. When we assume the coefficients are positive, these results extend to binary low l1-norm vectors without any threshold. Lastly, we show that for even activations (∀a, u(a) = u(−a)) and orthogonal weights, we can recover the weights with no threshold.
Learning with high thresholds at deeper layers. We also point out how such high threshold layers could potentially facilitate learning at any depth, not just at the lowest layer. If there is any cut in the network that takes inputs X1, . . . , Xd and if the upper layers operations can be modelled by a polynomial P , then assuming the inputs Xi have some degree of independence we could use this to modularly learn the lower and upper parts of the network separately (Appendix E)
2Here H4 and H2 are the fourth and second order hermite polynomials respectively. 3Theorem 1 holds for sigmoid with t ≥ c log d.
Related Work. Various works have attempted to understand the learnability of simple neural networks. Despite known hardness results Goel et al. (2016); Brutzkus & Globerson (2017), there has been an array of positive results under various distributional assumptions on the input and the underlying noise in the label. Most of these works have focused on analyzing one hidden layer neural networks. A line of research has focused on understanding the dynamics of gradient descent on these networks for recovering the underlying parameters under gaussian input distribution Du et al. (2017b;a); Li & Yuan (2017); Zhong et al. (2017a); Zhang et al. (2017); Zhong et al. (2017b). Another line of research borrows ideas from kernel methods and polynomial approximations to approximate the neural network by a linear function in a high dimensional space and subsequently learning the same Zhang et al. (2015); Goel et al. (2016); Goel & Klivans (2017b;a). Tensor decomposition methods Anandkumar & Ge (2016); Janzamin et al. (2015) have also been applied to learning these simple architectures.
The complexity of recovering arises from the highly non-convex nature of the loss function to be optimized. The main result we extend in this work is by Ge et al. (2017). They learn the neural network by designing a loss function that allows a ”well-behaved” landscape for optimization avoiding the complexity. However, much like most other results, it is unclear how to extend to deeper networks. The only known result for networks with more than one hidden layer is by Goel & Klivans (2017b). Combining kernel methods with isotonic regression, they show that they can provably learn networks with sigmoids in the first hidden layer and a single unit in the second hidden layer in polynomial time. We however model the above layer as a multivariate polynomial allowing for larger representation. Another work Arora et al. (2014) deals with learning a deep generative network when several random examples are generated in an unsupervised setting. By looking at correlations between input coordinates they are able to recover the network layer by layer. We use some of their ideas in section 4 when W is a sparse binary matrix.
Notation. We denote vectors and matrices in bold face. ||·||p denotes the lp-norm of a vector; ||·|| without a subscript denotes the l2-norm. For matrices, ||·|| denotes the spectral norm and ||·||F denotes the Frobenius norm. N(0, Σ) denotes the multivariate Gaussian distribution with mean 0 and covariance Σ. For a scalar x we will use φ(x) to denote the p.d.f. of the univariate standard normal distribution with mean zero and variance 1. For a vector x we will use φ(x) to denote the p.d.f. of the multivariate standard normal distribution with mean zero and variance 1 in each direction. Φ denotes the c.d.f. of the standard Gaussian distribution; also define Φc = 1 − Φ. Let hi denote the ith normalized Hermite polynomial Wikipedia contributors (2018). For a function f, let f̂i denote the ith coefficient in the Hermite expansion of f, that is, f̂i = E_{g∼N(0,1)}[f(g)hi(g)]. For a given function f computed by the neural network, we assume that the training samples (x, y) are such that x ∈ R^n is distributed according to N(0, I) and the label has no noise, that is, y = f(x). Note: most proofs are deferred to the Appendix due to lack of space.
2 APPROXIMATE RECOVERY WITH LINEAR TERM
In this section we consider the case when P has a positive linear component, and we wish to recover the true parameters W*. The algorithm has two steps: 1) it uses an existing one-hidden-layer learning algorithm (SGD on a carefully designed loss, Ge et al. (2017)) to recover an approximate solution; 2) it refines the approximate solution by performing a local search (for monotone P). The intuition behind the first step is that high thresholds enable P, in expectation, to be approximately close to a one-hidden-layer network, which allows us to transfer algorithms with approximate guarantees. Secondly, with the approximate solutions as starting points, we can evaluate the closeness of the estimate of each weight vector to the true weight vector using simple correlations. The intuition of this step is to correlate with a function that is large only in the direction of the true weight vectors. This equips us with a way to design a local-search-based algorithm to refine the estimate to small error.
For simplicity in this section we will work with P where the highest degree in any Xi is 1. The degree of the overall polynomial can still be n. See Appendix B.8 for the extension to general P . More formally,
Assumption 1 (Structure of network). We assume that P has the following structure:
P(X1, . . . , Xd) = c0 + ∑_{i∈[d]} ci Xi + ∑_{S⊆[d]:|S|>1} cS ∏_{j∈S} Xj
such that ci = Θ(1)4 for all i ∈ [d], and |cS| ≤ O(1) for all S ⊆ [d] such that |S| > 1. W* has constant condition number.
Thus f(x) = c0 + ∑_{i∈[d]} ci u((w*_i)^T x) + ∑_{S⊆[d]:|S|>1} cS ∏_{j∈S} u((w*_j)^T x). Denote flin(x) = c0 + ∑_{i∈[d]} ci u((w*_i)^T x), the linear part of f.
Next we upper bound the expected value of u(x): for the "high-threshold" ReLU, that is, u_t(a) = max(0, a − t), E_{g∼N(0,σ²)}[u_t(g)] is bounded by a function ρ(t, σ) ≈ e^{−t²/2σ²} (see Lemma 10). We also get a lower bound on |û4| in terms of ρ(t, σ).5 This enables us to make the following assumptions.
Assumption 2. The activation function u is a positive high-threshold activation with threshold t, that is, the bias term is t. E_{g∼N(0,σ²)}[u_t(g)] ≤ ρ(t, σ), where ρ is a positive decreasing function of t. Also, |ûk| = t^{Θ(1)}ρ(t, 1) for k = 2, 4.
Assumption 3 (Value of t). t is large enough such that ρ(t, ||W*||) ≈ d^{−η} and ρ(t, 1) ≈ d^{−pη} for a large enough constant η > 0 and p ∈ (0, 1].
For example, for the high-threshold ReLU, ρ(t, 1) = e^{−t²/2} and µ = ρ(t, ||W*||) = e^{−t²/2||W*||²}; thus t = √(2η log d) for large enough d suffices to get the above assumption (κ(W*) is a constant).
These high-threshold activations are useful for learning because, in expectation, they ensure that f is close to flin, since the product terms have low expected value. This is made clear by the following lemmas:
Lemma 1. For |S| > 1, under Assumption 2 we have
E[∏_{j∈S} u_t((w*_j)^T x)] ≤ ρ(t, 1)(κ(W*)ρ(t, ||W*||))^{|S|−1}.
So if µ := κ(W*)ρ(t, ||W*||), then E[∏_{j∈S} Xj[x]] ≤ ρ(t, 1)µ^{|S|−1}.
Lemma 2. Let ∆(x) = f(x) − flin(x). Under Assumptions 1, 2 and 3, if t is such that dρ(t, ||W*||) ≤ c for some small enough constant c > 0, we have
E[|∆(x)|] ≤ O(d³ρ(t, 1)ρ(t, ||W*||)) = O(d^{−(1+p)η+3}).
Note: We should point out that f(x) and flin(x) are very different point wise; they are just close in expectation under the distribution of x. In fact, if d is some constant then even the difference in expectation is some small constant.
This closeness suggests that algorithms for recovering under the labels from flin can be used to recover with labels from f approximately.
Learning One Layer Neural Networks using Landscape Design. Ge et al. (2017) proposed an algorithm for learning one-hidden-layer networks. Intuitively, the approach of Ge et al. (2017) is to design a well behaved loss function based on correlations to recover the underlying weight vectors. They show that the local minima of the following optimization corresponds to some transform of each of the w∗i – thus it can be used to recover a transform of w ∗ i , one at a time.
max_{z: E[flin(x)H2(z^T x)]=û2} sgn(û4)E[flin(x)H4(z^T x)]
which they optimize using the Lagrangian formulation (viewed as a minimization):
min_z Glin(z) := −sgn(û4)E[flin(x)H4(z^T x)] + λ(E[flin(x)H2(z^T x)] − û2)²
where H2(z^T x) = ||z||²h2(z^T x/||z||) = (z^T x)²/√2 − ||z||²/√2 and H4(z^T x) = ||z||⁴h4(z^T x/||z||) = √6((z^T x)⁴/12 − ||z||²(z^T x)²/2 + ||z||⁴/4) (see Appendix A.1 for more details). Using properties
4We can handle ci ∈ [d^{−C}, d^C] for some constant C by changing the scaling on t. 5For similar bounds for sigmoid and sign refer to Appendix B.7.
of Hermite polynomials, we have E[flin(x)H2(z^T x)] = û2 ∑_i ci(z^T w*_i)² and similarly E[flin(x)H4(z^T x)] = û4 ∑_i ci(z^T w*_i)⁴. Thus
Glin(z) = −|û4| ∑_i ci(z^T w*_i)⁴ + λû2²(∑_i ci(z^T w*_i)² − 1)².
Using results from Ge et al. (2017), it can be shown that the approximate local minima of this problem are close to columns of (TW∗)−1 where T is a diagonal matrix with Tii = √ ci.
Definition 1 ((ε, τ)-local minimum/maximum). z is an (ε, τ)-local minimum of F if ||∇F(z)|| ≤ ε and λmin(∇²F(z)) ≥ −τ.
Claim 1 (Ge et al. (2017)). An (ε, τ)-local minimum z of the Lagrangian formulation with ε ≤ O(√(τ³/|û4|)) is such that for an index i, |z^T wi| = 1 ± O(ε/λû2²) ± O(dτ/|û4|), and for all j ≠ i, |z^T wj| = O(√(τ/|û4|)), where the wi are columns of (TW*)^{−1}.
Ge et al. (2017) do not mention û2, but it is necessary in the non-orthogonal weight vectors case for the correct reduction; since for us this value can be small, we make the dependence explicit. Note that these are not exactly the directions w*_i that we need; one way to think about it is that we can get the correct directions by estimating all the columns and then inverting.
One-hidden-layer to Deep Neural Network. Consider the loss with f instead of flin:
min_z G(z) := −sgn(û4)E[f(x)H4(z^T x)] + λ(E[f(x)H2(z^T x)] − û2)²
We previously showed that f is close to flin in expectation due to the high threshold property. This also implies that Glin and G are close and so are the gradients and (eignevalues of) hessians of the same. This closeness implies that the landscape properties of one approximately transfers to the other function. More formally, Theorem 4. Let Z be an ( , τ)-local minimum of functionA. If ||∇(B−A)(Z)|| ≤ ρ and ||∇2(B− A)(Z)|| ≤ γ then Z is an ( + ρ, τ + γ)-local minimum of function B and vice-versa.
We will now apply above lemma on our Glin(z) and G(z). Claim 2. For λ = Θ(|û4|/û22) ≈ dη , an ( , τ)-approximate local minima of G (for small enough , τ ≤ d−2η) is an (O(log d)d−(1+p)η+3, O(log d)d−(1+p)η+3)-approximate local minima of Glin. This implies z is such that for an index i, |zTwi| = 1 ± O(1)d−2/3pη+3 and ∀j 6= i, |zTwj | = O(1)d−1/3pη+3/2 where wi are columns of (TW∗)−1 (ignoring log d factors). Note: For ReLU, setting t = √ C log d for large enough C > 0 we can get closeness 1/poly(d) to the columns of (TW∗)−1. Refer Appendix B.7 for details for sigmoid.
The paper Ge et al. (2017) also provides an alternate optimization that when minimized simultaneously recovers the entire matrix W∗ instead of having to learn columns of (TW∗)−1 separately. We show how applying our methods can also be applied to that optimization in Appendix B.4 to recover W∗ by optimizing a single objective.
2.1 APPROXIMATE TO ARBITRARILY CLOSE FOR MONOTONE P
Assuming P is monotone, we can show that the approximate solution from the previous analysis can be refined to arbitrarily closeness using a random search method followed by approximately finding the angle of our current estimate to the true direction.
The idea at a high level is to correlate with δ′(zTx − t) where δ is the Dirac delta function. It turns out that the correlation is maximized when z is equal to one of the wi. Correlation with δ′(zTx−t) is checking how fast the correlation of f with δ(zTx−t) is changing as you change t. To understand this look at the case when our activation u is the sign function then note that correlation of ut(wTx− t) with δ′(wTx− t) is very high as its correlation with δ(wTx− t′) is 0 when t′ < t and significant when t′ > t. So as we change t’ slightly from t− to t+ there is a sudden increase. If z and w differ then it can be shown that correlation of ut(wTx− t) with δ′(zTx− t) essentially depends on cot(α) where α is the angle between w and z (for a quick intuition note that one can
prove that E[ut(wTx)δ′(zTx)] = c cot(α). See Lemma 16 in Appendix). In the next section we will show how the same ideas work for non-monotone P even if it may not have any linear terms but we only manage to prove polynomial sample complexity for finding w instead of polynomial time complexity.
In this section we will not correlate exactly with δ′(zTx− t) but instead we will use this high level idea to estimate how fast the correlation with δ(zTx − t′) changes between two specific values as one changes t′, to get an estimate for cot(α). Secondly since we can’t to a smooth optimization over z, we will do a local search by using a random perturbation and iteratively check if the correlation has increased. We can assume that the polynomial P doesn’t have a constant term c0 as otherwise it can easily be determined and cancelled out6.
We will refine the weights one by one. WLOG, let us assume that w∗1 = e1 and we have z such that zTw∗1 = z1 = cos −1(α1). Let l(z, t, ) denote {x : zTx ∈ [t− , t]} for z ∈ Sn−1.
Algorithm 1 RefineEstimate 1: Run EstimateTanAlpha on z to get s = tan(α) where α is the angle between z and w∗1 . 2: Perturb current estimate z by a vector along the d− 1 dimensional hyperplane normal to z with
the distribution n(0,Θ(α/d))d−1 to get z′. 3: Run EstimateTanAlpha on z′ to get s′ = tan(α′) where α′ is the angle between z′ and w∗1 . 4: if α′ ≤ O(α/d) then 5: z ← z′ 6: Repeat till α′ ≤ .
Algorithm 2 EstimateTanAlpha 1: Find t1 and t2 such that Pr[sgn(f(x))|x ∈ l(z, t′, )] at t1 is 0.4 and at t2 is 0.6. 2: Return t2−t1Φ−1(0.6)−Φ−1(0.4) .
The algorithm (Algorithm 1) estimates the angle of the current estimate with the true vector and then subsequently perturbs the vector to get closer after each successful iteration.
Theorem 5. Given a vector z ∈ Sd−1 such that it is 1/poly(d)-close to the underlying true vector w∗1 , that is cos
−1(zTw∗1) ≤ 1/poly(d), running RefineEstimate for O(T ) iterations outputs a vector z∗ ∈ Sd−1 such that cos−1((z∗)Tw∗1) ≤ ( 1− cd )T γ for some constant c > 0. Thus after O(d log(1/ )) iterations cos−1((z∗)Tw∗1) ≤ .
We prove the correctness of the algorithm by first showing that EstimateTanAlpha gives a multiplicative approximation to tan(α). The following lemma captures this property.
Lemma 3. EstimateTanAlpha(z) outputs y such that y = (1 ± O(η)) tan(α) where α is the angle between z and w∗1 .
Proof. We first show that the given probability when computed with sgn(xTw∗1−t) is a well defined function of the angle between the current estimate and the true parameter up to multiplicative error. Subsequently we show that the computed probability is close to the one we can estimate using f(x) since the current estimate is close to one direction. The following two lemmas capture these properties.
Lemma 4. For t, t′ and ≤ 1/t′, we have Pr[xTw∗1 ≥ t and x ∈ l(z, t′, )|x ∈ l(z, t, )] = Φc ( t− t∗ cos(α1) | sin(α1)| ) ±O( )t′
Lemma 5. For t′ ∈ [0, t/ cos(α1)], we have
Pr[sgn(f(x))|x ∈ l(z, t′, )] = Pr[sgn((w∗1)Tx− t)|x ∈ l(z, t, )] + de−Ω(t 2).
6for example with RELU activation, f will be c0 most of the time as other terms in P will never activate. So c0 can be set to say the median value of f .
Using the above, we can show that, t2 − t1 = ( Φ−1(0.6− η1 ±O( )t1)− Φ−1(0.4− η2 ±O( )t2) ) tan(α)
= ( Φ−1(0.6)− Φ−1(0.4)− (η1 ±O( )t1)(Φ−1)′(p1) + (η2 ±O( )t2)(Φ−1)′(p2) ) tan(α)
where η1, η2 > 0 are the noise due to estimating using f and p1 ∈ [0.6 − η1 ± O( )t1, 0.6] and p2 ∈ [0.4 − η2 ± O( )t2, 0.4] as long as t1, t2 ∈ [0, t/ cos(α1)]. The following lemma bounds the range of t1 and t2.
Lemma 6. We have 0 ≤ t1 ≤ t2 ≤ tcos(α1) .
Thus, we have, t2 − t1
Φ−1(0.6)− Φ−1(0.4) = (1±O (η1 + η2 + t2)) tan(α)
as long as η2+O( )t2 ≤ c for some constant c > 0. Thus, we can get a multiplicative approximation to tan(α) up to error η ( can be chosen to make its contribution smaller than η).
Finally we show (proof in Appendix ??) that with constant probability, a random perturbation reduces the angle by a factor of (1 − 1/d) of the current estimate hence the algorithm will halt after O(d log(1/ν)) iterations.
Lemma 7. By applying a random Gaussian perturbation along the d − 1 dimensional hyperplane normal to z with the distribution n(0,Θ(α/d))d−1 and scaling back to the unit sphere, with constant probability, the angle α (< π/2) with the fixed vector decreases by at least Ω(α/d).
3 SAMPLE COMPLEXITY
We extend the methods of the previous section to a broader class of polynomials but only to obtain results in terms of sample complexity. The main idea as in the previous section is to correlate with δ′(zTx−t) (the derivative of the dirac delta function) and find arg max||z||2=1 E[f(x)δ
′(zTx−t)]. We will show that the correlation goes to infinity when z is one of w∗i and bounded if it is far from all of them. From a practical standpoint we calculate δ′(zTx − s) by measuring correlation with 1 2 (δ(z
Tx− s+ )− δ(zTx− s− ). In the limit as → 0 this becomes δ′(zTx− s). δ(zTx− s) in turn is estimated using 1 (sgn(z
Tx− s+ )− sgn(zTx− s)), as in the previous section, for an even smaller ; however, for ease of exposition, in this section, we will assume that correlations with δ(zTx− s) can be measured exactly. Let us recall that f(x) = P (u((w∗1) Tx), u((w∗2) Tx), . . . , u((w∗d)
Tx)). Let C1(f, z, s) denote E[f(x)δ(zTx− s)] and let C2(f, z, s) denote E[f(x)(δ(zTx− s− )− δ(zTx− s+ )].
If u = sgn then P has degree at most 1 in each X_i. Let ∂P/∂X_i denote the symbolic partial derivative of P with respect to X_i; that is, it drops monomials without X_i and factors X_i out of the remaining ones. Let us separate the dependence on X_i in P as follows:
$$P(X_1, \ldots, X_d) = X_i\,Q_i(X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_d) + R_i(X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_d);$$
then ∂P/∂X_i = Q_i.
We will overload notation and write P[x] for the polynomial computed by substituting X_i = u((w_i^*)^Tx), and similarly for Q and R. Under this notation f(x) = P[x]. We will also assume that $|P(X)| \le \|X\|^{O(1)} = \|X\|^{c_1}$ (say). By using simple correlations we will show:
Theorem 6. If u is the sgn function, $P(X) \le \|X\|^{c_1}$ and for all i, $E[Q_i[x] \mid (w_i^*)^Tx = t] \ge \epsilon_3$, then using $\mathrm{poly}(d/(\epsilon_3\epsilon_2))$ samples one can determine the w_i^*'s within error $\epsilon_2$.7
Note that if all the w_i^*'s are orthogonal then the X_i are independent, and $E[Q_i[x] \mid (w_i^*)^Tx = t]$ is just the value of Q_i evaluated by setting X_i = 1 and all the remaining X_j = µ, where µ = E[X_j]. This is the same as 1/µ times the coefficient of X_i in P(µ(X_1 + 1), . . . , µ(X_d + 1)).
7The theorem can be extended to ReLU by correlating with the second derivative δ′′ (see Appendix C.1).
Corollary 1. If u is the sgn function and the w_i^*'s are orthogonal, then with sample complexity $\mathrm{poly}(d/(\epsilon_3\epsilon_2))$ one can determine W^* within error $\epsilon_2$ in each entry, if the coefficient of the linear terms in P(µ(X_1 + 1), µ(X_2 + 1), µ(X_3 + 1), . . .) is larger than $\epsilon_3\mu$, where µ = E[X_i].
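To make the coefficient condition concrete, here is a small worked example (our own illustration) with d = 2, orthogonal w_1^*, w_2^*, and P(X_1, X_2) = X_1X_2:
$$P(\mu(X_1+1), \mu(X_2+1)) = \mu^2 X_1X_2 + \mu^2 X_1 + \mu^2 X_2 + \mu^2,$$
so each linear term has coefficient µ², and 1/µ times it is $\mu = E[X_2] = E[Q_1[x] \mid (w_1^*)^Tx = t]$, matching the identification above. The condition of Corollary 1 then reads µ² > ε₃µ, i.e. µ > ε₃.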
The main point behind the proof of Theorem 6 is that the correlation is high when z is along one of w∗i and negligible if it is not close to any of them.
Lemma 8. Assume $P(X) < \|X\|^{c_1}$. If z = w_i^* then $C_2(f, z, t) = \phi(t)\,E\left[\frac{\partial P}{\partial X_i}[x] \,\Big|\, z^Tx = t\right] \pm \epsilon\, d^{O(1)}$. Otherwise, if all angles α_i between z and the w_i^* are at least $\epsilon_2$, it is at most $d^{O(1)}\epsilon/\epsilon_2$.
We will use the notation $[g(x)]_{x=s}$ to denote g(x) evaluated at x = s. Thus the mean value theorem can be stated as $g(x + \epsilon) - g(x) = \epsilon\,[g'(s)]_{s = s'}$ for some $s' \in [x, x + \epsilon]$. We will overload notation a bit: φ(z^Tx = s) will denote the probability density that z^Tx = s; so if z is a unit vector this is just φ(s). Similarly, φ(z_1^Tx = s_1, z_2^Tx = s_2) denotes the probability density that both z_1^Tx = s_1 and z_2^Tx = s_2; so again, if z_1, z_2 are orthonormal then this is just φ(s_1)φ(s_2).
The following claim interprets the correlation with δ(z^Tx − s) as an expected value along the corresponding plane z^Tx = s.
Claim 3. $E[f(x)\delta(z^Tx - s)] = E[f(x) \mid z^Tx = s]\,\phi(z^Tx = s)$.
The following claim computes the correlation of P with δ′(z^Tx − s).
Claim 4. $E[P[x]\delta'(z^Tx - s)]$ is equal to
$$\sum_i |\cot(\alpha_i)|\,\phi(z^Tx = s, (w_i^*)^Tx = t)\; E\left[\frac{\partial P}{\partial X_i}[x] \,\Big|\, z^Tx = s, (w_i^*)^Tx = t\right] + \phi'(s)\,E[P[x] \mid z^Tx = s].$$
We use this to show that the correlation is bounded if all the angles are lower bounded.
Claim 5. If $P(X) \le \|X\|^{c_1}$ and z has an angle of at least $\epsilon_2$ with all the w_i^*'s, then $C_2(f, z, s) \le d^{O(1)}\epsilon/\epsilon_2$.
The above claims can be used to prove the main Lemma 8; refer to Appendix C for the proofs.
Proof of Theorem 6. If we wish to determine w_i^* within an angle of accuracy $\epsilon_2$, let us set ε to be $O(\epsilon_3\epsilon_2\phi(t)d^{-c})$. From Lemma 8, for some large enough c, this will ensure that if all $\alpha_i > \epsilon_2$ the correlation is $o(\phi(t)\epsilon_3)$; otherwise it is $\phi(t)\epsilon_3(1 \pm o(1))$. Since φ(t) = poly(1/d), given $\mathrm{poly}(d/(\epsilon_2\epsilon_3))$ samples we can test whether a given direction is within accuracy $\epsilon_2$ of a w_i^* or not.
4 STRONGER RESULTS UNDER STRUCTURAL ASSUMPTIONS
Under additional structural assumptions on W^*, such as the weights being binary (that is, in {0, 1}), sparse, or with certain restrictions on the activation functions, we can give stronger recovery guarantees. Proofs have been deferred to Appendix D.
Theorem 7. Consider the activation $u_t(a) = e^{\rho(a-t)}$. Let the weight vectors w_i^* be 0/1 vectors that select coordinates of x. If, for each i, there are exactly d indices j such that w_{ij} = 1, and the coefficient of the linear terms in P(µ(X_1 + 1), µ(X_2 + 1), µ(X_3 + 1), . . .) for $\mu = e^{-\rho t}$ is larger than the coefficient of all the product terms (by a constant factor gap), then we can learn W^*.
In order to prove the above, we construct a correlation graph over x_1, . . . , x_n and subsequently identify cliques in the graph to recover the w_i^*'s.
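The following sketch illustrates the correlation-graph idea (our own simplification: the plain pairwise statistic E[f(x) x_i x_j] and the threshold `thresh` are stand-ins for the exact quantities used in the appendix proof):

```python
import numpy as np
import networkx as nx

def recover_supports(X, y, thresh):
    """X: samples (rows), y: labels f(x). Coordinates feeding the same hidden
    unit show an elevated pairwise correlation with the label; each w_i^* is
    then read off as (the indicator vector of) a clique of the graph."""
    n = X.shape[1]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.mean(y * X[:, i] * X[:, j]) > thresh:   # estimate E[f(x) x_i x_j]
                G.add_edge(i, j)
    return [sorted(c) for c in nx.find_cliques(G)]        # candidate supports
```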
With no threshold, recovery is still possible for disjoint, low l1-norm vectors. The proof uses simple correlations and shows that the local maxima of the optimization landscape for maximizing these correlations are exactly the w_i^*'s.
Theorem 8. Consider the activation $u(a) = e^a$. If all $w_i^* \in \{0, 1\}^n$ are disjoint, then we can learn the w_i^* as long as P has all positive coefficients and the product terms have degree at most 1 in each variable.
For even activations, it is possible to recover the weight vectors even when the threshold is 0. The technique used is the PCA-like optimization using hermite polynomials, as in Section 2. Denote $C(S, \mu) = \sum_{S \subseteq S' \subseteq [n]} c_{S'}\,\mu^{|S'|}$.
Theorem 9. If the activation is even and for every i, j: $C(\{i\}, \hat{u}_0) + C(\{j\}, \hat{u}_0) > \frac{6\hat{u}_2^2}{\hat{u}_0\hat{u}_4}\, C(\{i, j\}, \hat{u}_0)$, then there exists an algorithm that can recover the underlying weight vectors.
5 CONCLUSION
In this work we show how activations in a deep network that have a high threshold make it easier to learn the lowest layer of the network. We show that for a large class of functions that represent the upper layers, the lowest layer can be learned with high precision. Even if the threshold is low we show that the sample complexity is polynomially bounded. An interesting open direction is to apply these methods to learn all layers recursively. It would also be interesting to obtain stronger results if the high thresholds are only present at a higher layer based on the intuition we discussed.
A PREREQUISITES
A.1 HERMITE POLYNOMIALS
Hermite polynomials form a complete orthogonal basis for the gaussian distribution with unit variance. For more details refer to Wikipedia contributors (2018). Let hi be the normalized hermite polynomials. They satisfy the following,
Fact 0. E[hn(x)] = 0 for n > 0 and E[h0(x)] = 1.
Fact 1. Ea∼N(0,1)[hi(a)hj(a)] = δij where δij = 1 iff i = j.
This can be extended to the following:
Fact 2. For a, b with marginal distribution N(0, 1) and correlation ρ, $E[h_i(a)h_j(b)] = \delta_{ij}\rho^j$.
Consider the following expansion of u into the hermite basis (hi),
$$u(a) = \sum_{i=0}^{\infty}\hat{u}_i h_i(a).$$
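These coefficients are easy to compute numerically. The sketch below (our own illustration) evaluates $\hat{u}_i$ by Gauss–Hermite quadrature for the high-threshold ReLU, matching the scaling claimed later in Lemma 10:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def u_hat(u, i, nodes=100):
    """u_hat_i = E_{g~N(0,1)}[u(g) h_i(g)], with h_i the probabilists' Hermite
    polynomial He_i / sqrt(i!); hermegauss integrates against e^{-x^2/2}."""
    x, w = He.hermegauss(nodes)
    hi = He.hermeval(x, [0.0] * i + [1.0]) / sqrt(factorial(i))
    return float(np.sum(w * u(x) * hi) / sqrt(2 * pi))

t = 3.0
relu_t = lambda a: np.maximum(0.0, a - t)        # high-threshold ReLU
print(u_hat(relu_t, 2), u_hat(relu_t, 4))        # both ~ t^{Theta(1)} e^{-t^2/2}
```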
Lemma 9. For unit norm vectors v, w, $E[u(v^Tx)h_j(w^Tx)] = \hat{u}_j(v^Tw)^j$.
Proof. Observe that vTx and wTx have marginal distribution N(0, 1) and correlation vTw. Thus using Fact 2,
$$E[u(v^Tx)h_j(w^Tx)] = \sum_{i=0}^{\infty}\hat{u}_i\,E[h_i(v^Tx)h_j(w^Tx)] = \sum_{i=0}^{\infty}\hat{u}_i\,\delta_{ij}(v^Tw)^j = \hat{u}_j(v^Tw)^j.$$
For gaussians with mean 0 and variance σ², define the weighted hermite polynomials $H_l^\sigma(a) = |\sigma|^l h_l(a/\sigma)$. Given input v^Tx for x ∼ N(0, I), we suppress the superscript σ = ||v||.
Corollary 2. For a non-zero vector v (not necessarily unit norm) and a unit norm vector w, $E[H_i(v^Tx)h_j(w^Tx)] = \delta_{ij}(v^Tw)^j$.
Proof. It follows as in the proof of the previous lemma: $v^Tx/\|v\| \sim N(0,1)$ has correlation $(v^Tw)/\|v\|$ with $w^Tx$, so by Fact 2,
$$E[H_i(v^Tx)h_j(w^Tx)] = \|v\|^i\, E\left[h_i\!\left(\frac{v^Tx}{\|v\|}\right)h_j(w^Tx)\right] = \|v\|^i\,\delta_{ij}\left(\frac{v^Tw}{\|v\|}\right)^j = \delta_{ij}(v^Tw)^j.$$
Fact 3. $h_n(x+y) = 2^{-\frac{n}{2}}\sum_{k=0}^{n}\binom{n}{k}\, h_{n-k}(x\sqrt{2})\, h_k(y\sqrt{2})$.
Fact 4. $h_n(\gamma x) = \sum_{k=0}^{\lfloor n/2\rfloor} \gamma^{n-2k}(\gamma^2-1)^k \binom{n}{2k}\frac{(2k)!}{k!}\, 2^{-k}\, h_{n-2k}(x)$.
Fact 5. $\alpha(n, m, \gamma) = E[h_m(x)h_n(\gamma x)] = \gamma^{n-2k}(\gamma^2-1)^k \binom{n}{2k}\frac{(2k)!}{k!}\, 2^{-k}$ for $k = \frac{n-m}{2}$ if $k \in \mathbb{Z}^+$, else 0.
A.2 PROPERTIES OF MATRICES
Consider a matrix A ∈ R^{m×m}. Let σ_i(A) be the i-th singular value of A, such that σ_1(A) ≥ σ_2(A) ≥ . . . ≥ σ_m(A), and set κ(A) = σ_1(A)/σ_m(A).
Fact 6. |det(A)| = ∏m i=1 σi(A).
Fact 7. Let B be an (m − k) × (m − k) principal submatrix of A; then κ(B) ≤ κ(A).
A.3 ACTIVATION FUNCTIONS
Lemma 10. For u being a high-threshold ReLU, that is, $u_t(a) = \max(0, a - t)$, we have for t ≥ C, for a large enough constant C > 0: $E_{g\sim N(0,\sigma^2)}[u_t(g)] \le e^{-\frac{t^2}{2\sigma^2}}$. Also, $\hat{u}_4, \hat{u}_2 = t^{\Theta(1)} e^{-\frac{t^2}{2}}$.
Proof. We have
$$E_{g\sim N(0,\sigma^2)}[u_t(g)] = \frac{1}{\sqrt{2\pi}\sigma}\int_{-\infty}^{\infty}\max(0, g-t)\,e^{-\frac{g^2}{2\sigma^2}}dg = \frac{1}{\sqrt{2\pi}\sigma}\int_{t}^{\infty}(g-t)\,e^{-\frac{g^2}{2\sigma^2}}dg$$
$$\le \frac{1}{\sqrt{2\pi}\sigma}\int_{t}^{\infty} g\, e^{-\frac{g^2}{2\sigma^2}}dg = \frac{\sigma}{\sqrt{2\pi}}\int_{\frac{t^2}{2\sigma^2}}^{\infty} e^{-h}dh = \frac{\sigma}{\sqrt{2\pi}}\, e^{-\frac{t^2}{2\sigma^2}}.$$
Also,
$$\hat{u}_4 = E_{g\sim N(0,1)}[u_t(g)h_4(g)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\max(0,g-t)(g^4-6g^2+3)\,e^{-\frac{g^2}{2}}dg = \frac{1}{\sqrt{2\pi}}\int_{t}^{\infty}(g-t)(g^4-6g^2+3)\,e^{-\frac{g^2}{2}}dg$$
$$\ge \frac{1}{\sqrt{2\pi}}(t^4-6t^2)\,\frac{1}{t}\,e^{-\frac{t^2}{2}-1-\frac{1}{2t^2}} \ge \Omega\left(t^3 e^{-\frac{t^2}{2}}\right).$$
To upper bound,
$$\hat{u}_4 = \frac{1}{\sqrt{2\pi}}\int_{t}^{\infty}(g-t)(g^4-6g^2+3)\,e^{-\frac{g^2}{2}}dg \le \frac{1}{\sqrt{2\pi}}\int_{t}^{\infty}2g^5\, e^{-\frac{g^2}{2}}dg = \frac{8}{\sqrt{2\pi}}\int_{\frac{t^2}{2}}^{\infty}h^2 e^{-h}dh = O\left(t^4 e^{-\frac{t^2}{2}}\right).$$
A similar analysis holds for $\hat{u}_2$.
Observe that sgn can be bounded very similarly, replacing g − t by 1, which affects the bounds by at most a polynomial-in-t factor.
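The bound of Lemma 10 is loose but easy to check numerically; here is a quick Monte Carlo sketch (our own illustration, with arbitrary parameter choices):

```python
import numpy as np

rng = np.random.default_rng(5)
for t, sigma in [(2.0, 1.0), (3.0, 1.0), (3.0, 1.5)]:
    g = sigma * rng.standard_normal(5_000_000)
    lhs = np.mean(np.maximum(0.0, g - t))          # E[u_t(g)] for high-threshold ReLU
    print(t, sigma, lhs, np.exp(-t ** 2 / (2 * sigma ** 2)))   # lhs <= bound
```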
Lemma 11. For u being a high-threshold sgn, that is, $u_t(a) = \mathrm{sgn}(a - t)$, we have for t ≥ C, for a large enough constant C > 0: $E_{g\sim N(0,\sigma^2)}[u_t(g)] \le e^{-\frac{t^2}{2\sigma^2}}$. Also, $\hat{u}_4, \hat{u}_2 = t^{\Theta(1)}e^{-\frac{t^2}{2}}$.
For sigmoid, the dependence varies as follows:
Lemma 12. For u being a high-threshold sigmoid, that is, $u_t(a) = \frac{1}{1+e^{-(a-t)}}$, we have for t ≥ C, for a large enough constant C > 0: $E_{g\sim N(0,\sigma^2)}[u_t(g)] \le e^{-t+\frac{\sigma^2}{2}}$. Also, $\hat{u}_4, \hat{u}_2 = \Theta(e^{-t})$.
Proof. We have
$$E_{g\sim N(0,\sigma^2)}[u_t(g)] = \frac{1}{\sqrt{2\pi}\sigma}\int_{-\infty}^{\infty}\frac{1}{1+e^{-(g-t)}}\,e^{-\frac{g^2}{2\sigma^2}}dg = \frac{e^{-t}}{\sqrt{2\pi}\sigma}\int_{-\infty}^{\infty}\frac{1}{e^{-t}+e^{-g}}\,e^{-\frac{g^2}{2\sigma^2}}dg$$
$$\le \frac{e^{-t}}{\sqrt{2\pi}\sigma}\int_{-\infty}^{\infty}e^{g}\,e^{-\frac{g^2}{2\sigma^2}}dg = \frac{e^{-t}e^{\frac{\sigma^2}{2}}}{\sqrt{2\pi}\sigma}\int_{-\infty}^{\infty}e^{-\frac{(g-\sigma^2)^2}{2\sigma^2}}dg = e^{-t}e^{\frac{\sigma^2}{2}}.$$
Also,
$$\hat{u}_4 = E_{g\sim N(0,1)}[u_t(g)h_4(g)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\frac{g^4-6g^2+3}{1+e^{-(g-t)}}\,e^{-\frac{g^2}{2}}dg = \frac{e^{-t}}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\frac{g^4-6g^2+3}{e^{-t}+e^{-g}}\,e^{-\frac{g^2}{2}}dg$$
$$\ge \frac{e^{-t}}{\sqrt{2\pi}}\int_{0}^{\infty}\frac{g^4-6g^2+3}{e^{-t}+e^{-g}}\,e^{-\frac{g^2}{2}}dg \ge \frac{e^{-t}}{\sqrt{2\pi}}\int_{0}^{\infty}\frac{1}{2}(g^4-6g^2+3)\,e^{-\frac{g^2}{2}}dg = \Omega(e^{-t}).$$
We can upper bound similarly, and bound $\hat{u}_2$.
B APPROXIMATE RECOVERY WITH LINEAR TERMS
B.1 CONSTRAINED OPTIMIZATION VIEW OF LANDSCAPE DESIGN
Let us consider the linear case where the w_i^*'s are orthonormal. Consider the following maximization problem for even l ≥ 4:
$$\max_{z\in S^{n-1}}\; \mathrm{sgn}(\hat{u}_l)\cdot E\left[f(x)\cdot H_l(z^Tx)\right]$$
where h_l is the l-th hermite polynomial. Then we have
$$\mathrm{sgn}(\hat{u}_l)\cdot E\left[f(x)\,h_l(z^Tx)\right] = \mathrm{sgn}(\hat{u}_l)\cdot E\left[\left(\sum_{i=1}^{k} c_i u_t((w_i^*)^Tx)\right) h_l(z^Tx)\right] = \mathrm{sgn}(\hat{u}_l)\sum_{i=1}^{k} c_i\, E\left[u_t((w_i^*)^Tx)\, h_l(z^Tx)\right] = |\hat{u}_l|\sum_{i=1}^{k} c_i\,((w_i^*)^Tz)^l.$$
It is easy to see that for z ∈ S^{n−1}, the above is maximized at exactly one of the w_i's (up to a sign flip for even l) for l ≥ 3, as long as $\hat{u}_l \ne 0$. Thus, each w_i is a local maximum of the above problem.
Let $L(z) = -\sum_{i=1}^{k} c_i z_i^l$. For the constraint ||z||_2 = 1, we have the following optimality conditions (see Nocedal & Wright (2006) for more details).
First order:
$$\nabla L(z) - \frac{z^T\nabla L(z)}{\|z\|^2}\,z = 0 \quad\text{and}\quad \|z\|_2 = 1.$$
This applied to our function gives us that, for $\lambda = -\frac{l\sum_i c_i z_i^l}{2\|z\|^2}$ (λ < 0),
$$-l\, c_i z_i^{l-1} - 2\lambda z_i = 0.$$
The above implies that either $z_i = 0$ or $z_i^{l-2} = -\frac{2\lambda}{lc_i}$, with ||z||_2 = 1. For this to hold, z is such that for some set S ⊆ [n] with |S| ≥ 1, only the i ∈ S have $z_i \ne 0$ and $\sum_{i\in S} z_i^2 = 1$. This implies that for all i ∈ S, $z_i^{l-2} = -\frac{2\lambda}{lc_i}$.
Second order: for all w ≠ 0 such that $w^Tz = 0$, $w^T(\nabla^2 L(z) - 2\lambda I)w \ge 0$. For our function, we have:
$$\nabla^2 L(z) = -l(l-1)\,\mathrm{diag}(c_1 z_1^{l-2}, \ldots, c_n z_n^{l-2}) \implies (\nabla^2 L(z))_{ij} = \begin{cases} 2(l-1)\lambda & \text{if } i = j \text{ and } i \in S \\ 0 & \text{otherwise,}\end{cases}$$
where the last step follows from the first order condition. For the second order condition to be satisfied we will show that |S| = 1. Suppose |S| ≥ 2; then choosing w such that $w_i = 0$ for i ∉ S and $w^Tz = 0$ (it is possible to choose such a w since |S| ≥ 2), we get $w^T(\nabla^2 L(z) - 2\lambda I)w = 2(l-2)\lambda \|w\|^2$, which is negative since λ < 0; thus these cannot be local minima. However, for |S| = 1 we cannot have such a w, since to satisfy $w^Tz = 0$ we need $w_i = 0$ for all i ∈ S; this gives us $w^T(\nabla^2 L(z) - 2\lambda I)w = -2\lambda \|w\|^2$, which is always positive. Thus z = ±e_i are the only local minima of this problem.
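The conclusion that the coordinate vectors are the only local optima is easy to check numerically. The sketch below (our own illustration, with generic positive weights) runs projected gradient ascent on $\sum_i c_i z_i^4$ over the unit sphere:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
c = rng.uniform(1.0, 2.0, d)                 # generic positive coefficients
z = rng.standard_normal(d)
z /= np.linalg.norm(z)
for _ in range(500):
    g = 4 * c * z ** 3                       # gradient of sum_i c_i z_i^4
    z = z + 0.05 * g
    z /= np.linalg.norm(z)                   # project back to the sphere
print(np.round(z, 3))                        # one coordinate ~ +/-1, the rest ~ 0
```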
B.2 IMPORTANT RESULTS FROM GE ET AL. (2017)
Lemma 13 (Ge et al. (2017)). If z is an (ε, τ)-local minimum of $F(z) = -\sum_i \alpha_i z_i^4 + \lambda(\sum_i z_i^2 - 1)^2$ for $\epsilon \le \sqrt{\tau^3/\alpha_{\min}}$, where $\alpha_{\min} = \min_i \alpha_i$, then
• (Lemma 5.2) $|z|_{2nd} \le \sqrt{\tau/\alpha_{\min}}$, where $|z|_{2nd}$ denotes the magnitude of the second largest entry of z (in magnitude).
• (Derived from Proposition 5.7) $z_{\max} = \pm 1 \pm O(d\tau/\alpha_{\min}) \pm O(\epsilon/\lambda)$, where $|z|_{\max}$ is the value of the largest entry of z (in magnitude).
B.3 OMITTED PROOFS FOR ONE-BY-ONE RECOVERY
Proof of Lemma 1. Let O ∈ R^{d×d} be the orthonormal basis (row-wise) of the subspace spanned by the w_i^* for all i ∈ [d], generated using Gram–Schmidt (with the procedure done in order, with the elements of S first). Now let O_S ∈ R^{|S|×d} be the matrix corresponding to the first |S| rows, and let O_S^⊥ ∈ R^{(d−|S|)×n} be the matrix corresponding to the remaining rows. Note that OW^* (where W^* has the same ordering) is an upper triangular matrix under this construction.
$$E\left[\prod_{j\in S} u_t((w_j^*)^Tx)\right] = \frac{1}{(2\pi)^{n/2}}\int_x \prod_{i\in S} u_t(x^Tw_i^*)\, e^{-\frac{\|x\|^2}{2}}dx = \frac{1}{(2\pi)^{n/2}}\int_x \prod_{i\in S} u_t((O_Sw_i^*)^TO_Sx)\, e^{-\frac{\|O_Sx\|^2 + \|O_S^\perp x\|^2}{2}}dx$$
$$= \left(\frac{1}{(2\pi)^{\frac{|S|}{2}}}\int_{x'\in R^{|S|}} \prod_{i\in S} u_t((O_Sw_i^*)^Tx')\, e^{-\frac{\|x'\|^2}{2}}dx'\right)\left(\frac{1}{(2\pi)^{\frac{d-|S|}{2}}}\int_{x'\in R^{d-|S|}} e^{-\frac{\|x'\|^2}{2}}dx'\right)$$
$$= \frac{1}{(2\pi)^{\frac{|S|}{2}}}\int_{x'\in R^{|S|}} \prod_{i\in S} u_t((O_Sw_i^*)^Tx')\, e^{-\frac{\|x'\|^2}{2}}dx' = \frac{|\det(O_SW_S^*)|^{-1}}{(2\pi)^{\frac{|S|}{2}}}\int_{b\in R^{|S|}} \prod_{i\in S} u_t(b_i)\, e^{-\frac{\|(O_SW_S^*)^{-T}b\|^2}{2}}db.$$
Now observe that O_SW_S^* is also an upper triangular matrix, since it is a principal sub-matrix of OW^*. Thus, using Facts 6 and 7, we get the last equality. Also, the row with a single non-zero entry has that entry equal to 1 (since ||w_i^*|| = 1 for all i); hence the inverse also has a row whose single non-zero entry is 1. WLOG assume index 1 corresponds to this row. Thus we can split this as follows:
$$E\left[\prod_{j\in S} u_t((w_j^*)^Tx)\right] \le |\det(O_SW_S^*)|^{-1}\left(\frac{1}{\sqrt{2\pi}}\int_{b_1} u_t(b_1)e^{-\frac{b_1^2}{2}}db_1\right)\prod_{i\in S\setminus\{1\}}\frac{1}{\sqrt{2\pi}}\int_{b_i} u_t(b_i)\,e^{-\frac{b_i^2}{2\|O_SW_S^*\|^2}}db_i$$
$$\le |\det(O_SW_S^*)|^{-1}\left(\frac{1}{\sqrt{2\pi}}\int_{b_1} u_t(b_1)e^{-\frac{b_1^2}{2}}db_1\right)\prod_{i\in S\setminus\{1\}}\frac{1}{\sqrt{2\pi}}\int_{b_i} u_t(b_i)\,e^{-\frac{b_i^2}{2\|W^*\|^2}}db_i \le \rho(t,1)\left(\kappa(W^*)\,\rho(t, \|W^*\|)\right)^{|S|-1}.$$
Proof of Claim 1. Consider the SVD of the matrix M = UDU^T. Let W = UD^{−1/2} and $y_i = \sqrt{c_i}\,W^Tw_i^*$ for all i. It is easy to see that the y_i are orthonormal. Let F(z) = G(Wz):
$$F(z) = |\hat{u}_4|\sum_i c_i(z^TW^Tw_i^*)^4 - \lambda\hat{u}_2^2\left(\sum_i c_i(z^TW^Tw_i^*)^2 - 1\right)^2 = |\hat{u}_4|\sum_i \frac{1}{c_i}(z^Ty_i)^4 - \lambda\hat{u}_2^2\left(\sum_i (z^Ty_i)^2 - 1\right)^2.$$
Since the y_i are orthonormal, for the purpose of analysis we can assume that y_i = e_i; the formulation then reduces to $\max_z\, |\hat{u}_4|\sum_i \frac{1}{c_i}z_i^4 - \lambda'(\|z\|^2 - 1)^2$ up to the scaling $\lambda' = \lambda\hat{u}_2^2$. Note that this is of the form in Lemma 13; hence using that, we can show that the approximate local minima of F(z) are close to the y_i, and thus the local minima of G(z) are close to $Wy_i = \sqrt{c_i}\,WW^Tw_i^* = \sqrt{c_i}\,M^{-1}w_i^*$ due to the linear transformation. These can alternately be viewed as the columns of $(TW^*)^{-1}$, since $TW^*M^{-1}(TW^*)^T = I$.
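The whitening step is easy to verify numerically. Here is a short sketch (our own illustration, taking $M = \sum_i c_i w_i^* w_i^{*T}$ as the setup of the claim):

```python
import numpy as np

rng = np.random.default_rng(7)
d = 5
Wstar = rng.standard_normal((d, d))          # rows are the w_i^* (full rank w.h.p.)
c = rng.uniform(0.5, 2.0, d)
M = Wstar.T @ np.diag(c) @ Wstar             # M = sum_i c_i w_i* w_i*^T
D, U = np.linalg.eigh(M)                     # M = U diag(D) U^T
W = U @ np.diag(D ** -0.5)                   # W = U D^{-1/2}
Y = (W.T @ Wstar.T) * np.sqrt(c)[None, :]    # columns y_i = sqrt(c_i) W^T w_i*
print(np.round(Y.T @ Y, 6))                  # ~ identity: the y_i are orthonormal
```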
Proof of Theorem 4. Let Z be an (ε, τ)-local minimum of A. Then we have ||∇A(Z)|| ≤ ε and $\lambda_{\min}(\nabla^2A(Z)) \ge -\tau$. Observe that
$$\|\nabla B(Z)\| = \|\nabla(A + (B - A))(Z)\| \le \|\nabla A(Z)\| + \|\nabla(B - A)(Z)\| \le \epsilon + \rho.$$
Also observe that
$$\lambda_{\min}(\nabla^2 B(Z)) = \lambda_{\min}(\nabla^2(A + (B - A))(Z)) \ge \lambda_{\min}(\nabla^2 A(Z)) + \lambda_{\min}(\nabla^2(B - A)(Z)) \ge -\tau - \|\nabla^2(B - A)(Z)\| \ge -\tau - \gamma.$$
Here we use $|\lambda_{\min}(M)| \le \|M\|$ for any symmetric matrix M. To see this, write $\|M\| = \max_{x\in S^{n-1}}\|Mx\|$ and $x = \sum_i x_iv_i$, where the v_i are the (orthonormal) eigenvectors; then $Mx = \sum_i x_i\lambda_i(M)v_i$ with $\sum_i x_i^2 = 1$, which gives $\|M\| = \max_x\sqrt{\sum_i x_i^2\lambda_i^2(M)} \ge |\lambda_{\min}(M)|$.
Proof of Lemma 2. Expanding f, we have
$$E[|\Delta(x)|] = E\left[\left|\sum_{S\subseteq[d]:|S|>1} c_S\prod_{j\in S} u_t((w_j^*)^Tx)\right|\right] \le \sum_{S\subseteq[d]:|S|>1} |c_S|\,E\left[\prod_{j\in S} u_t((w_j^*)^Tx)\right]$$
$$\le C\sum_{S\subseteq[d]:|S|>1}\rho(t,1)\left(\frac{1}{\sigma_{\min}(W^*)}\rho(t,\|W^*\|)\right)^{|S|-1} \qquad\text{(using Lemma 1)}$$
$$= C\sum_{i=2}^{d}\binom{d}{i}\rho(t,1)\left(\frac{1}{\sigma_{\min}(W^*)}\rho(t,\|W^*\|)\right)^{i-1} \le C\sum_{i=2}^{d} d\,\rho(t,1)\left(\frac{d}{\sigma_{\min}(W^*)}\rho(t,\|W^*\|)\right)^{i-1} \qquad\left(\text{using } \binom{d}{i}\le d^i\right)$$
$$\le Cd^2\rho(t,1)\left(\frac{d}{\sigma_{\min}(W^*)}\rho(t,\|W^*\|)\right) \qquad\text{(using the assumption on } t).$$
Lemma 14. For any function L such that $\|L(z,x)\| \le C(z)\|x\|^{O(1)}$, where C is a function not dependent on x, we have $\|E[\Delta(x)L(z,x)]\| \le C(z)\,O(\log d)\,d^{-(1+p)\eta+3}$.
Proof. We have
$$\|E[\Delta(x)L(z,x)]\| \le E[|\Delta(x)|\,\|L(z,x)\|] \le C(z)\,E[|\Delta(x)|\,\|x\|^{O(1)}]$$
$$= C(z)\left(E[|\Delta(x)|\,\|x\|^{O(1)} \mid \|x\| \ge c]\Pr[\|x\| \ge c] + E[|\Delta(x)|\,\|x\|^{O(1)} \mid \|x\| < c]\Pr[\|x\| < c]\right)$$
$$\le C(z)\left(E[\|x\|^{O(1)} \mid \|x\| \ge c]\Pr[\|x\| \ge c] + c^{O(1)}E[|\Delta(x)|]\right) = C(z)\left(c^{O(1)}e^{-\frac{c^2}{2}} + c^{O(1)}E[|\Delta(x)|]\right).$$
Now using Lemma 2 to bound E[|∆(x)|], for $c = \Theta(\sqrt{\eta\log d})$ we get the required result.
Lemma 15. For $\|z\| = \Omega(1)$ and $\lambda = \Theta(|\hat{u}_4|/\hat{u}_2^2) \approx d^\eta$, $\|\nabla G(z)\| \ge \Omega(1)\,d^{-\eta}$.
Proof. Let K = κ(W^*), which by assumption is Θ(1). We will argue that a local minimum of G cannot have z with large norm; let us first argue this for G_lin(z). We know that $G_{lin}(z) = -\alpha\sum_i(z^Tw_i^*)^4 + \lambda\beta^2\left(\sum_i(z^Tw_i^*)^2 - 1\right)^2$, where α = |û_4| and β = û_2. We will argue that $z^T\nabla G_{lin}(z)$ is large if z is large:
$$z^T\nabla G_{lin}(z) = -4\alpha\sum_i(z^Tw_i^*)^3(z^Tw_i^*) + 2\lambda\beta^2\left(\sum_i(z^Tw_i^*)^2 - 1\right)\left(\sum_i 2(z^Tw_i^*)(z^Tw_i^*)\right) = -4\alpha\sum_i(z^Tw_i^*)^4 + 4\lambda\beta^2\left(\sum_i(z^Tw_i^*)^2 - 1\right)\left(\sum_i(z^Tw_i^*)^2\right).$$
Let y = W^*z; then K||z|| ≥ ||y|| ≥ ||z||/K, since K is the condition number of W^*. This implies
$$z^T\nabla G_{lin}(z) = -4\alpha\sum_i y_i^4 + 4\lambda\beta^2(\|y\|^2 - 1)\|y\|^2 \ge 4\|y\|^2\left((\lambda\beta^2 - \alpha)\|y\|^2 - \lambda\beta^2\right) \ge \Omega(1)\,d^{-\eta}\|y\|^4.$$
Since ||y|| ≥ ||z||/K = Ω(1), by the assumptions on λ and z we have $z^T\nabla G_{lin}(z) \ge \Omega(\lambda\beta^2\|y\|^4) = \Omega(1)\,d^{-\eta}\|z\|^4$. This implies $\|\nabla G_{lin}(z)\| = \Omega(1)\,d^{-\eta}\|z\|^3$. Now we need to argue for G:
$$G(z) - G_{lin}(z) = -\mathrm{sgn}(\hat{u}_4)E[\Delta(x)H_4(z^Tx)] + \lambda\,E[\Delta(x)H_2(z^Tx)]^2 + 2\lambda\,E[\Delta(x)H_2(z^Tx)]\,E[f_{lin}(x)H_2(z^Tx) - \beta]$$
$$= -\mathrm{sgn}(\hat{u}_4)\|z\|^4E[\Delta(x)h_4(z^Tx/\|z\|)] + \lambda\|z\|^4E[\Delta(x)h_2(z^Tx/\|z\|)]^2 + 2\lambda\|z\|^4E[\Delta(x)h_2(z^Tx/\|z\|)]\,E[f_{lin}(x)h_2(z^Tx/\|z\|)] - 2\lambda\beta\|z\|^2E[\Delta(x)h_2(z^Tx/\|z\|)].$$
Now $h_4(z^Tx/\|z\|)$ has no gradient component along z, so $z^T\nabla h_4(z^Tx/\|z\|) = 0$; similarly $z^T\nabla h_2(z^Tx/\|z\|) = 0$. So
$$z^T\nabla(G(z) - G_{lin}(z)) = -4\,\mathrm{sgn}(\hat{u}_4)\|z\|^4E[\Delta(x)h_4(z^Tx/\|z\|)] + 4\lambda\|z\|^4\left(E[\Delta(x)h_2(z^Tx/\|z\|)]\right)^2 + 8\lambda\|z\|^4E[\Delta(x)h_2(z^Tx/\|z\|)]\,E[f_{lin}(x)h_2(z^Tx/\|z\|)] - 4\lambda\beta\|z\|^2E[\Delta(x)h_2(z^Tx/\|z\|)].$$
We know that $E[f_{lin}(x)h_2(z^Tx/\|z\|)]$ has a factor of β, giving us, using Lemma 14,
$$|z^T\nabla(G(z) - G_{lin}(z))| \le O(\log d)\,d^{-(1+p)\eta+3}\|z\|^4.$$
So $z^T\nabla G(z)$ is also $\Omega(1)\,d^{-\eta}\|z\|^4$, and hence $\|\nabla G(z)\| \ge \Omega(1)\,d^{-\eta}$.
Proof of Claim 2. We have G − G_lin as follows:
$$G(z) - G_{lin}(z) = -\mathrm{sgn}(\hat{u}_4)E[\Delta(x)H_4(z^Tx)] + \lambda\left(E[\Delta(x)H_2(z^Tx)]\right)^2 + 2\lambda\,E[\Delta(x)H_2(z^Tx)]\,E[f_{lin}(x)H_2(z^Tx) - \hat{u}_2].$$
Thus we have
$$\nabla(G(z) - G_{lin}(z)) = -\mathrm{sgn}(\hat{u}_4)E[\Delta(x)\nabla H_4(z^Tx)] + 2\lambda\,E[\Delta(x)H_2(z^Tx)]\,E[\Delta(x)\nabla H_2(z^Tx)]$$
$$+\; 2\lambda\,E[f_{lin}(x)H_2(z^Tx) - \hat{u}_2]\,E[\Delta(x)\nabla H_2(z^Tx)] + 2\lambda\,E[\Delta(x)H_2(z^Tx)]\,E[f_{lin}(x)\nabla H_2(z^Tx)].$$
Observe that H_2 and H_4 are degree 2 and 4 (respectively) polynomials, so the norms of their gradients and hessians can be bounded by at most $O(\|z\|\,\|x\|^4)$. Using Lemma 14 we can bound each term by roughly $O(\log d)\,d^{-(1+p)\eta+3}\|z\|^4$. Note that λ being large does not hurt, as it is scaled appropriately in each term. Subsequently, using Lemma 15, we can show that ||z|| is bounded by a constant, since $\|G(z)\| \le d^{-2\eta}$. A similar analysis holds for the hessian.
Now applying Theorem 4 gives us that z is an $(O(\log d)d^{-(1+p)\eta+3},\, O(\log d)d^{-(1+p)\eta+3})$-approximate local minimum of G_lin. This implies that it is also an $(\epsilon' := C\log(d)\,d^{-(1+2p)\eta+3},\, \tau' := C\log(d)\,d^{-(1+2p/3)\eta+3})$-approximate local minimum of G_lin for large enough C > 0, by increasing τ. Observe that $\sqrt{\tau'^3/|\hat{u}_4|} = C^{3/2}\log^{3/2}(d)\,d^{-(3/2+p)\eta+9/2}/d^{-\eta/2} = C^{3/2}\log^{3/2}(d)\,d^{-(1+p)\eta+9/2} \ge \epsilon'$. Now using Claim 1, we get the required result.
B.4 SIMULTANEOUS RECOVERY
Ge et al. (2017) also showed simultaneous recovery: minimizing the loss function G_lin defined below has a well-behaved landscape.
$$G_{lin}(W) = E\left[f_{lin}(x)\sum_{j,k\in[d],\, j\ne k}\psi(w_j, w_k, x)\right] - \gamma\, E\left[f_{lin}(x)\sum_{j\in[d]} H_4(w_j^Tx)\right] + \lambda\sum_i\left(E\left[f_{lin}(x)H_2(w_i^Tx)\right] - \hat{u}_2\right)^2 \tag{1}$$
where $\psi(v, w, x) = H_2(v^Tx)H_2(w^Tx) + 2(v^Tw)^2 + 4(v^Tx)(w^Tx)\,v^Tw$.
They gave the following result.
Theorem 10 (Ge et al. (2017)). Let c be a sufficiently small universal constant (e.g. c = 0.01 suffices), and suppose the activation function u satisfies û_4 ≠ 0. Assume γ ≤ c, λ ≥ Ω(|û_4|/û_2^2), and let W^* be the true weight matrix. The function G_lin satisfies the following:
1. Any saddle point W has a strictly negative curvature, in the sense that $\lambda_{\min}(\nabla^2G_{lin}(W)) \le -\tau_0$, where $\tau_0 = c\min\{\gamma|\hat{u}_4|/d,\, \lambda\hat{u}_2^2\}$.
2. Suppose W is an (ε, τ_0)-approximate local minimum. Then W can be written as $W^{-T} = PDW^* + E$, where D is a diagonal matrix with $D_{ii} \in \{\pm 1 \pm O(\gamma|\hat{u}_4|/\lambda\hat{u}_2^2) \pm O(\epsilon/\lambda)\}$, P is a permutation matrix, and the error term satisfies $\|E\| \le O(\epsilon d/\hat{u}_4)$.
We show that this minimization is robust. Let us consider the function G corresponding to G_lin, with the additional non-linear terms, as follows:
$$G(W) = E\left[f(x)\sum_{j,k\in[d],\, j\ne k}\psi(w_j, w_k, x)\right] - \gamma\,E\left[f(x)\sum_{j\in[d]} H_4(w_j^Tx)\right] + \lambda\sum_i\left(E\left[f(x)H_2(w_i^Tx)\right] - \hat{u}_2\right)^2.$$
Now we can show that G and G_lin are close, as in the one-by-one case:
$$R(W) := G(W) - G_{lin}(W) = E[\Delta(x)A(W,x)] - \gamma\,E[\Delta(x)B(W,x)] + \lambda\left(E[f(x)C(W,x)]^2 - E[f_{lin}(x)C(W,x)]^2\right)$$
$$= E[\Delta(x)A(W,x)] - \gamma\,E[\Delta(x)B(W,x)] + \lambda\,E[\Delta(x)C(W,x)]\,E[(f(x') + f_{lin}(x'))\,C(W,x')]$$
$$= E[\Delta(x)A(W,x)] - \gamma\,E[\Delta(x)B(W,x)] + \lambda\,E[\Delta(x)D(W,x)] = E[\Delta(x)(A(W,x) - \gamma B(W,x) + \lambda D(W,x))] = E[\Delta(x)L(W,x)]$$
where $A(W,x) = \sum_{j,k\in[d],\, j\ne k}\psi(w_j, w_k, x)$, $B(W,x) = \sum_{j\in[d]}H_4(w_j^Tx)$, $C(W,x) = \sum_i H_2(w_i^Tx)$, $D(W,x) = C(W,x)\,E[(f(x') + f_{lin}(x'))\,C(W,x')]$, and $L(W,x) = A(W,x) - \gamma B(W,x) + \lambda D(W,x)$.
Using an analysis similar to the one-by-one case, we can show the required closeness. It is easy to see that ||∇L|| and ||∇²L|| are bounded above by a constant-degree polynomial in $O(\log d)\,d^{-(1+p)\eta+3}\max_i\|w_i\|^4$. No row can have large weight: if any row were large, then looking at the gradient for that row reduces to the one-by-one case, where it cannot be larger than a constant. Thus we have the same closeness as in the one-by-one case. Combining this with Theorems 10 and 4, we have the following theorem:
Theorem 11. Let c be a sufficiently small universal constant (e.g. c = 0.01 suffices), and suppose Assumptions 1, 2 and 3 hold. Assume γ ≤ c, λ = Θ(d^η), and let W^* be the true weight matrix. The function G satisfies the following:
1. Any saddle point W has a strictly negative curvature, in the sense that $\lambda_{\min}(\nabla^2G(W)) \le -\tau_0$, where $\tau_0 = O(\log d)\,d^{-\Omega(1)}$.
2. Suppose W is a $(d^{-\Omega(1)}, d^{-\Omega(1)})$-approximate local minimum. Then W can be written as $W^{-T} = PDW^* + E$, where D is a diagonal matrix with $D_{ii} \in \{\pm 1 \pm O(\gamma) \pm d^{-\Omega(1)}\}$, P is a permutation matrix, and the error term satisfies $\|E\| \le O(\log d)\,d^{-\Omega(1)}$.
Using standard optimization techniques we can find such a local minimum.
B.5 APPROXIMATE TO ARBITRARILY CLOSE
Lemma 16. If u is the sign function then $E[u(w^Tx)\delta'(z^Tx)] = c\,|\cot(\alpha)|$, where w, z are unit vectors, α is the angle between them, and c is some constant.
Proof. WLOG we can work in the plane spanned by z and w, and assume that z is along the unit vector i and w = i cos α + j sin α. Thus we can replace the vector x by ix + jy, where x, y are independent standard normal scalars. Also note that u′ = δ (the Dirac delta function).
$$E[u(w^Tx)\delta'(z^Tx)] = E[u(x\cos\alpha + y\sin\alpha)\delta'(x)] = \int_y\int_x u(x\cos\alpha + y\sin\alpha)\,\delta'(x)\,\phi(x)\phi(y)\,dx\,dy.$$
Using the fact that $\int_x\delta'(x)h(x)\,dx = h'(0)$, this becomes
$$= \int_y \phi(y)\left[\frac{\partial}{\partial x}\, u(x\cos\alpha + y\sin\alpha)\phi(x)\right]_{x=0}dy = \int_y \phi(y)\left[\phi(x)\,u'(x\cos\alpha + y\sin\alpha)\cos\alpha + \phi'(x)\,u(x\cos\alpha + y\sin\alpha)\right]_{x=0}dy$$
$$= \int_{y=-\infty}^{\infty}\phi(y)\,\phi(0)\,\delta(y\sin\alpha)\cos\alpha\, dy,$$
where the second term vanishes since φ′(0) = 0. Substituting s = y sin α, this becomes
$$= \int_{s=-\infty/\sin\alpha}^{\infty/\sin\alpha}\phi(s/\sin\alpha)\,\phi(0)\,\delta(s)\cos\alpha\,\frac{1}{\sin\alpha}\,ds = \mathrm{sgn}(\sin\alpha)\cot(\alpha)\,\phi(0)\int_s\phi(s/\sin\alpha)\delta(s)\,ds = |\cot(\alpha)|\,\phi(0)^2.$$
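Lemma 16 can be sanity-checked by Monte Carlo. The sketch below (our own illustration) estimates the δ′-correlation by a finite difference of band averages, and compares the ratio at two angles against cot(α), which sidesteps the absolute constant:

```python
import numpy as np

rng = np.random.default_rng(3)
n, h, eps = 5_000_000, 0.1, 0.05
x = rng.standard_normal((n, 2))
s = x[:, 0]                                  # z = e_1, so z^T x = x_1

def corr_deriv(alpha):
    """Finite difference of G(a) := E[sgn(w^T x) | z^T x = a] * phi(a) at 0,
    which tracks the delta'-correlation up to sign convention."""
    u = np.sign(x @ np.array([np.cos(alpha), np.sin(alpha)]))
    G = lambda a: np.sum(u[np.abs(s - a) <= eps / 2]) / (n * eps)
    return (G(h) - G(-h)) / (2 * h)

print(corr_deriv(0.5) / corr_deriv(1.0), np.tan(1.0) / np.tan(0.5))  # ~ equal
```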
Proof of Lemma 4. Let us compute the probability of lying in the ε-band for any t:
$$\Pr[x\in l(z,t,\epsilon)] = \Pr[t - \epsilon \le z^Tx \le t] = \Pr_{g\in N(0,\|z\|^2)}[t-\epsilon\le g\le t] = \frac{1}{\sqrt{2\pi}\|z\|}\int_{g=t-\epsilon}^{t}e^{-\frac{g^2}{2\|z\|^2}}dg = \frac{\epsilon}{\sqrt{2\pi}\|z\|}\,e^{-\frac{\bar{t}^2}{2\|z\|^2}}$$
where the last equality follows from the mean value theorem for some $\bar{t}\in[t-\epsilon, t]$. Next we compute the following:
$$\Pr[x^Tw_1^* \ge t \text{ and } x\in l(z,t',\epsilon)] = \frac{1}{(2\pi)^{\frac{n}{2}}}\int_x \mathbb{1}[x_1\ge t]\,\mathbb{1}[x\in l(z,t',\epsilon)]\,e^{-\frac{\|x\|^2}{2}}dx$$
$$= \frac{1}{(2\pi)^{\frac{1}{2}}}\int_{x_1=t}^{\infty}e^{-\frac{x_1^2}{2}}\left(\frac{1}{(2\pi)^{\frac{n-1}{2}}}\int_{x_{-1}}\mathbb{1}[x_{-1}\in l(z_{-1},\, t'-z_1x_1,\,\epsilon)]\,e^{-\frac{\|x_{-1}\|^2}{2}}dx_{-1}\right)dx_1$$
$$= \frac{1}{(2\pi)^{\frac{1}{2}}}\int_{x_1=t}^{\infty}e^{-\frac{x_1^2}{2}}\,\Pr[x_{-1}\in l(z_{-1},\, t'-z_1x_1,\,\epsilon)]\,dx_1 = \frac{1}{2\pi\|z_{-1}\|}\int_{g=t'-\epsilon}^{t'}\int_{x_1=t}^{\infty}e^{-\frac{x_1^2}{2}}\,e^{-\frac{(g-z_1x_1)^2}{2\|z_{-1}\|^2}}dx_1\,dg$$
$$= \frac{1}{2\pi\|z_{-1}\|}\int_{g=t'-\epsilon}^{t'}e^{-\frac{g^2}{2\|z\|^2}}\int_{x_1=t}^{\infty}e^{-\frac{\left(x_1-\frac{gz_1}{\|z\|^2}\right)^2}{2\,\|z_{-1}\|^2/\|z\|^2}}dx_1\,dg = \frac{1}{\sqrt{2\pi}\|z\|}\int_{g=t'-\epsilon}^{t'}e^{-\frac{g^2}{2\|z\|^2}}\,\Phi^c\left(\frac{t\|z\|^2 - gz_1}{\|z_{-1}\|\,\|z\|}\right)dg$$
$$= \frac{\epsilon}{\sqrt{2\pi}}\,e^{-\frac{t^{*2}}{2}}\,\Phi^c\left(\frac{t - t^*\cos(\alpha_1)}{|\sin(\alpha_1)|}\right)$$
where the last equality follows from the mean value theorem for some $t^*\in[t'-\epsilon, t']$ (using ||z|| = 1). Combining the above, we get:
$$\Pr[x^Tw_1^* \ge t \mid x\in l(z,t',\epsilon)] = e^{-\frac{t^{*2}-\bar{t}^2}{2}}\,\Phi^c\left(\frac{t - t^*\cos(\alpha_1)}{|\sin(\alpha_1)|}\right) = \Phi^c\left(\frac{t - t^*\cos(\alpha_1)}{|\sin(\alpha_1)|}\right) \pm O(\epsilon)\,t'$$
for ε ≤ 1/t′.
Proof of Lemma 5. Recall that P is monotone with positive linear terms; thus for a high-threshold u (0 unless the input exceeds t, and positive after) we have $\mathrm{sgn}(f(x)) = \vee_i\,\mathrm{sgn}(x^Tw_i^* - t)$. This is because, for any i, P applied to X_i > 0 and X_j = 0 for all j ≠ i gives c_iX_i, which is positive; also, P(0) = 0. Thus sgn(P) is 1 if any of the inputs is positive. Using this, we have
$$\Pr[\mathrm{sgn}(f(x)) \mid x\in l(z,t',\epsilon)] \ge \Pr[\mathrm{sgn}((w_1^*)^Tx - t) \mid x\in l(z,t',\epsilon)].$$
Also,
$$\Pr[\mathrm{sgn}(f(x)) \mid x\in l(z,t',\epsilon)] \le \sum_i\Pr[\mathrm{sgn}(x^Tw_i^* - t) \mid x\in l(z,t',\epsilon)] = \Pr[\mathrm{sgn}((w_1^*)^Tx - t) \mid x\in l(z,t',\epsilon)] + \sum_{i\ne 1}\Pr[\mathrm{sgn}(x^Tw_i^* - t) \mid x\in l(z,t',\epsilon)]$$
$$\le \Pr[\mathrm{sgn}((w_1^*)^Tx - t) \mid x\in l(z,t',\epsilon)] + \eta,$$
where $\eta := \sum_{i\ne 1}\Pr[\mathrm{sgn}(x^Tw_i^* - t) \mid x\in l(z,t',\epsilon)]$. We will show that η is not large: since z is close to one of the vectors, it cannot be close to the others, and thus α_i will be large for all i ≠ 1. Let us bound η:
$$\sum_{i\ne 1}\Pr[\mathrm{sgn}(x^Tw_i^* - t) \mid x\in l(z,t',\epsilon)] \le \sum_{i\ne 1}\left(\Phi^c\left(\frac{t - t_i^*\cos(\alpha_i)}{|\sin(\alpha_i)|}\right) + O(\epsilon)t'\right) \le \sum_{i\ne 1}\left(\Phi^c\left(\frac{t - t'\cos(\alpha_i)}{|\sin(\alpha_i)|}\right) + O(\epsilon)t'\right) \le \sum_{i\ne 1}\frac{1}{\sqrt{2\pi}\gamma_i}e^{-\frac{\gamma_i^2}{2}} + O(\epsilon)\,dt'$$
where $\gamma_i = \frac{t - t'\cos(\alpha_i)}{|\sin(\alpha_i)|}$. The above follows since γ_i ≥ 0, by the assumption on t′. Under the assumption, letting β = max_{i≠1} cos(α_i), we have
$$\gamma_i \ge \frac{t\left(1 - \frac{\beta}{\cos(\alpha_1)}\right)}{\sqrt{1-\beta^2}} = \Omega(t)$$
under our setting. Thus we have
$$\sum_{i\ne 1}\Pr[\mathrm{sgn}(x^Tw_i^* - t) \mid x\in l(z,t',\epsilon)] \le de^{-\Omega(t^2)} + O(\epsilon)\,dt = de^{-\Omega(t^2)}$$
for small enough ε.

1. What is the focus of the paper regarding deep neural networks?
2. What are the three main assumptions made in the paper for recovering the lowest layer of a deep neural network?
3. Can you provide an overview of the proposed algorithm's two steps?
4. What are your concerns about the paper's writing style?
5. How do you assess the novelty and significance of the paper's contributions to theoretical machine learning? | Review | Review
This paper considers the problem of recovering the lowest layer of a deep neural network whose architecture is ReLU or sign function followed by a polynomial. This paper relies on three assumptions: 1) the lowest layer has a high threshold (\Omleg(\sqrt{d})), 2) the polynomial has 1/poly(d) lower bouned and O(1) upper bounded linear terms and is monotone 3) the input is Gaussian. Under these assumptions, this paper shows it is possible to learn the lowest layer in precision \eps in poly(1/eps, d) time.
The proposed algorithm has two steps. The first step is based on the landscape design approach proposed by Ge et al. (2017) and the second step is based on checking the correlation.
Provably learning a neural network is a major problem in theoretical machine learning. The assumptions made in this paper are fine for me and I think this paper indeed has some new interesting observation. My major concern is the writing. There are several components of the algorithm. However, it is hard to digest the intuition behind each component and how the assumptions are used. I suggest authors providing a high-level and non-technical description of the whole algorithm at the beginning. If authors can significantly improve the writing, I am happy to re-evaluate my comments and increase my rating. |
ICLR | Title
Recovering the Lowest Layer of Deep Networks with High Threshold Activations
Abstract
Giving provable guarantees for learning neural networks is a core challenge of machine learning theory. Most prior work gives parameter recovery guarantees for one hidden layer networks, however, the networks used in practice have multiple non-linear layers. In this work, we show how we can strengthen such results to deeper networks – we address the problem of uncovering the lowest layer in a deep neural network under the assumption that the lowest layer uses a high threshold before applying the activation, the upper network can be modeled as a well-behaved polynomial and the input distribution is gaussian.
1 INTRODUCTION
Understanding the landscape of learning neural networks has been a major challege in machine learning. Various works gives parameter recovery guarantees for simple one-hidden-layer networks where the hidden layer applies a non-linear activation u after transforming the input x by a matrix W, and the upper layer is the weighted sum operator: thus f(x) = ∑ aiu(w T i x). However, the networks used in practice have multiple non-linear layers and it is not clear how to extend these known techniques to deeper networks.
We consider a multilayer neural network with the first layer activation u and the layers above represented by an unknown polynomial P such that it has non-zero non-linear components. More precisely, the function f computed by the neural network is as follows:
fW(x) = P (u(w T 1 x), u(w T 2 x), . . . , u(w T d x)) for P (X1, . . . , Xd) = ∑ r∈Zd+ cr · ∏ j X rj j .
We assume that the input x is generated from the standard Gaussian distribution and there is an underlying true network (parameterized by some unknown W∗)1 from which the labels are generated.
In this work we strengthen previous results for one hidden layer networks to a larger class of functions representing the transform made by the upper layer functions if the lowest layer uses a high threshold (high bias term) before applying the activation: u(a − t) instead of u(a). Intuitively, a high threshold is looking for a high correlation of the input a with a direction w∗i . Thus even if the function f is applying a complex transform after the first layer, the identity of these high threshold directions may be preserved in the training data generated using f .
Learning with linear terms in P . Suppose P has a linear component then we show that increasing the threshold t in the lowest layer is equivalent to amplifying the coefficients of the linear part. Instead of dealing with the polynomial P it turns out that we can roughly think of it as P (µX1, ..., µXd) where µ decreases exponentially in t (µ ≈ e−t 2
). As µ decreases it has the effect of diminishing the non-linear terms more strongly so that relatively the linear terms stand out. Taking advantage of this effect we manage to show that if t exceeds a certain threshold the non linear terms drop in value enough so that the directions wi can be learned by relatively simple methods. We show that we can get close to the wi applying a simple variant of PCA. While an application of PCA can be thought of as finding principal directions as the local maxima of max||z||=1 E[f(x)(zTx)2],
1We suppress W when it is clear from context.
we instead perform maxE[f(x)H2(zTx)2]=1 E[f(x)H4(zTx)4]]2. If W∗ has a constant condition number then the local maxima can be used to recover directions that are transforms of wi. Theorem 1 (informal version of Claim 2, Theorem 11). If t > c √ log d for large enough constant c > 0 and P has linear terms with absolute value of coefficients at least 1/poly(d) and all coefficients at most O(1), we can recover the weight vector wi within error 1/poly(d) in time poly(d).
These approximations of wi obtained collectively can be further refined by looking at directions along which there is a high gradient in f ; for monotone functions we show how in this way we can recover wi exactly (or within any desired precision. Theorem 2. (informal version of Theorem 5) Under the conditions of the previous theorem, for monotone P , there exists a procedure to refine the angle to precision in time poly(1/ , d) starting from an estimate that is 1/poly(d) close.
The above mentioned theorems hold for u being sign and ReLU.3
When P is monotone and u is the sign function, learning W is equivalent to learning a union of half spaces. We learn W∗ by learning sign of P which is exactly the union of halfspaces wTi x = t. Thus our algorithm can also be viewed as a polynomial time algorithm for learning a union of large number of half spaces that are far from the origin – to our knowledge this is the first polynomial time algorithm for this problem but with this extra requirement (see earlier work Vempala (2010) for an exponential time algorithm). Refer to Appendix B.6 for more details.
Such linear components in P may easily be present: consider for example the case where P (X) = u(vTX − b) where u is say the sigmoid or the logloss function. The taylor series of such functions has a linear component – note that since the linear term in the taylor expansion of u(x) has coefficient u′(0), for expansion of u(x−b) it will be u′(−b) which is Θ(e−b) in the case of sigmoid. In fact one may even have a tower (deep network) or such sigmoid/logloss layers and the linear components will still be present – unless they are made to cancel out precisely; however, the coefficients will drop exponentially in the depth of the networks and the threshold b.
Sample complexity with low thresholds and no explicit linear terms. Even if the threshold is not large or P is not monotone, we show that W∗ can be learned with a polynomial sample complexity (although possibly exponential time complexity) by finding directions that maximize the gradient of f . Theorem 3 (informal version of Corollary 1). If u is the sign function and wi’s are orthogonal then in poly(1/ , d) samples one can determine W∗ within precision if the coefficient of the linear terms in P (µ(X1 + 1), µ(X2 + 1), µ(X3 + 1), . . .) is least 1/poly(d)
Learning without explicit linear terms. We further provide evidence that P may not even need to have the linear terms – under some restricted cases (section 4), we show how such linear terms may implicitly arise even though they may be entirely apparently absent. For instance consider the case when P = ∑ XiXj that does not have any linear terms. Under certain additional assumptions we show that one can recover wi as long as the polynomial P (µ(X1 +1), µ(X2 +1), µ(X3 +1), ..) (where µ is e−t has linear terms components larger than the coefficients of the other terms). Note that this transform when applied to P automatically introduces linear terms. Note that as the threshold increases applying this transform on P has the effect of gathering linear components from all the different monomials in P and penalizing the higher degree monomials. We show that if W∗ is a sparse binary matrix then we can recover W∗ when activation u(a) = eρa under certain assumptions about the structure of P . When we assume the coefficients are positive then these results extend for binary low l1- norm vectors without any threshold. Lastly, we show that for even activations (∀a, u(a) = u(−a)) under orthogonal weights, we can recover the weights with no threshold.
Learning with high thresholds at deeper layers. We also point out how such high threshold layers could potentially facilitate learning at any depth, not just at the lowest layer. If there is any cut in the network that takes inputs X1, . . . , Xd and if the upper layers operations can be modelled by a polynomial P , then assuming the inputs Xi have some degree of independence we could use this to modularly learn the lower and upper parts of the network separately (Appendix E)
2Here H4 and H2 are the fourth and second order hermite polynomials respectively. 3Theorem 1 holds for sigmoid with t ≥ c log d.
Related Work. Various works have attempted to understand the learnability of simple neural networks. Despite known hardness results Goel et al. (2016); Brutzkus & Globerson (2017), there has been an array of positive results under various distributional assumptions on the input and the underlying noise in the label. Most of these works have focused on analyzing one hidden layer neural networks. A line of research has focused on understanding the dynamics of gradient descent on these networks for recovering the underlying parameters under gaussian input distribution Du et al. (2017b;a); Li & Yuan (2017); Zhong et al. (2017a); Zhang et al. (2017); Zhong et al. (2017b). Another line of research borrows ideas from kernel methods and polynomial approximations to approximate the neural network by a linear function in a high dimensional space and subsequently learning the same Zhang et al. (2015); Goel et al. (2016); Goel & Klivans (2017b;a). Tensor decomposition methods Anandkumar & Ge (2016); Janzamin et al. (2015) have also been applied to learning these simple architectures.
The complexity of recovering arises from the highly non-convex nature of the loss function to be optimized. The main result we extend in this work is by Ge et al. (2017). They learn the neural network by designing a loss function that allows a ”well-behaved” landscape for optimization avoiding the complexity. However, much like most other results, it is unclear how to extend to deeper networks. The only known result for networks with more than one hidden layer is by Goel & Klivans (2017b). Combining kernel methods with isotonic regression, they show that they can provably learn networks with sigmoids in the first hidden layer and a single unit in the second hidden layer in polynomial time. We however model the above layer as a multivariate polynomial allowing for larger representation. Another work Arora et al. (2014) deals with learning a deep generative network when several random examples are generated in an unsupervised setting. By looking at correlations between input coordinates they are able to recover the network layer by layer. We use some of their ideas in section 4 when W is a sparse binary matrix.
Notation. We denote vectors and matrices in bold face. || · ||p denotes the lp-norm of a vector. || · || without subscript implies the l2-norm. For matrices || · || denotes the spectral norm and || · ||F denotes the forbenius norm. N (0,Σ) denotes the multivariate gausssian distribution with mean 0 and covariance Σ. For a scalar x we will use φ(x) to denote the p.d.f. of the univariate standard normal distribution with mean zero and variance 1 .For a vector x we will use φ(x) to denote the p.d.f. of the multivariate standard normal distribution with mean zero and variance 1 in each direction. Φ denotes the c.d.f. of the standard gausssian distribution. Also define Φc = 1 − Φ. Let hi denote the ith normalized Hermite polynomial Wikipedia contributors (2018). For a function f , let f̂i denote the ith coefficient in the hermite expansion of f , that is, f̂i = Eg∼N (0,1)[f(g)hi(g)]. For a given function f computed by the neural network, we assume that the training samples (x, y) are such that x ∈ Rn is distributed according to N (0, 1) and label has no noise, that is, y = f(x). Note: Most proofs are deferred to the Appendix due to lack of space.
2 APPROXIMATE RECOVERY WITH LINEAR TERM
In this section we consider the case when P has a positive linear component and we wish to recover the parameters of true parameters W∗. The algorithm has two-steps: 1) uses existing one-hidden layer learning algorithm (SGD on carefully designed loss Ge et al. (2017)) to recover an approximate solution , 2) refine the approximate solution by performing local search (for monotone P ). The intuition behind the first step is that high thresholds enable P to in expectation be approximately close to a one-hidden-layer network which allows us to transfer algorithms with approximate guarantees. Secondly, with the approximate solutions as starting points, we can evaluate the closeness of the estimate of each weight vector to the true weight vector using simple correlations. The intuition of this step is to correlate with a function that is large only in the direction of the true weight vectors. This equips us with a way to design a local search based algorithm to refine the estimate to small error.
For simplicity in this section we will work with P where the highest degree in any Xi is 1. The degree of the overall polynomial can still be n. See Appendix B.8 for the extension to general P . More formally,
Assumption 1 (Structure of network). We assume that P has the following structure P (X1, . . . , Xk) = c0 + ∑ i∈[d] ciXi + ∑ S⊆[d]:|S|>1 cS ∏ j∈S Xj such that ci = Θ(1)
4 for all i ∈ [d] and for all S ⊆ [d] such that |S| > 1, |cS | ≤ O(1). W∗ has constant condition number.
Thus f(x) = c0 + ∑ i∈[d] ciu((w ∗ i ) Tx) + ∑ S⊆[d]:|S|>1 cS ∏ j∈S u((w ∗ j ) Tx). Denote flin(x) =
c0 + ∑ i∈[d] ciu((w ∗ i ) Tx) to be the linear part of f .
Next we will upper bound expected value of u(x): for ”high-threshold” ReLU, that is, ut(a) = max(0, a − t), Eg∼N(0,σ2)[ut(g)] is bounded by a function ρ(t, σ) ≈ e− t2
2σ2 (see Lemma 10). We also get a lower bound on |û4| in terms of ρ(t, σ) 5 This enables us to make the following assumption. Assumption 2. Activation function u is a positive high threshold activation with threshold t, that is, the bias term is t. Eg∼N(0,σ2)[ut(g)] ≤ ρ(t, σ) where ρ is a positive decreasing function of t. Also, |ûk| = tΘ(1)ρ(t, 1) for k = 2, 4. Assumption 3 (Value of t). t is large enough such that ρ(t, ||W∗||) ≈ d−η and ρ(t, 1) ≈ d−pη with for large enough constant η > 0 and p ∈ (0, 1].
For example, for high threshold ReLU, ρ(t, 1) = e−t 2/2 and µ = ρ(t, ||W∗||) = e−t2/2||W∗||2 , thus t = √ 2η log d for large enough d suffices to get the above assumption (κ(W∗) is a constant).
These high-threshold activation are useful for learning as in expectation, they ensure that f is close to flin since the product terms have low expected value. This is made clear by the following lemmas: Lemma 1. For |S| > 1, under Assumption 2 we have,
E ∏ j∈S ut((w ∗ j ) Tx) ≤ ρ(t, 1) (κ(W∗)ρ(t, ||W∗||))|S|−1 . So if µ := κ(W∗)ρ(t, ||W∗||), then E[ ∏ j∈S Xj [x]] ≤ ρ(t, 1)µ|S|−1
Lemma 2. Let ∆(x) = f(x) − flin(x). Under Assumptions 1, 2 and 3, if t is such that dρ(t, ||W∗||) ≤ c for some small enough constant c > 0 we have,
E[|∆(x)|] ≤ O ( d3ρ(t, 1)ρ(t, ||W∗||) ) = O ( d−(1+p)η+3 ) .
Note: We should point out that f(x) and flin(x) are very different point wise; they are just close in expectation under the distribution of x. In fact, if d is some constant then even the difference in expectation is some small constant.
This closeness suggests that algorithms for recovering under the labels from flin can be used to recover with labels from f approximately.
Learning One Layer Neural Networks using Landscape Design. Ge et al. (2017) proposed an algorithm for learning one-hidden-layer networks. Intuitively, the approach of Ge et al. (2017) is to design a well behaved loss function based on correlations to recover the underlying weight vectors. They show that the local minima of the following optimization corresponds to some transform of each of the w∗i – thus it can be used to recover a transform of w ∗ i , one at a time.
max z:E[flin(x)H2(zTx)]=û2
sgn(û4)E[flin(x)H4(zTx)]
which they optimize using the Lagrangian formulation (viewed as a minimization):
min z
Glin(z) := −sgn(û4)E[flin(x)H4(zTx)] + λ(E[flin(x)H2(zTx)]− û2)2
where H2(zTx) = ||z||2h2 ( zTx ||z|| ) = (z Tx)2√ 2 − ||z|| 2 √ 2 and H4(zTx) = ||z||4h4 ( zTx ||z|| ) = √ 6 (z Tx)4
12 − ||z||2(zTx)2 2 + ||z||4 4 (see Appendix A.1 for more details). Using properties
4We can handle ∈ [d−C , dC ] for some constant C by changing the scaling on t. 5For similar bounds for sigmoid and sign refer to Appendix B.7.
of Hermite polynomials, we have E[flin(x)H2(zTx)] = û2 ∑ i ci(z Tw∗i ) 2 and similarly
E[flin(x)H4(zTx)] = û4 ∑ i(z Tw∗i ) 4. Thus
Glin(z) = −|û4| ∑ i ci(z Tw∗i ) 4 + λû22 (∑ i ci(z Tw∗i ) 2 − 1 )2 .
Using results from Ge et al. (2017), it can be shown that the approximate local minima of this problem are close to columns of (TW∗)−1 where T is a diagonal matrix with Tii = √ ci.
Definition 1 (( , τ)-local minimum/maximum). z is an ( , τ)-local minimum of F if ||∇F (z)|| ≤ and λmin(∇2F (z)) ≤ τ . Claim 1 (Ge et al. (2017)). An ( , τ)-local minima of the Lagrangian formulation z with ≤ O (√ τ3/|û4| )
is such that for an index i |zTwi| = 1 ± O( /λû22) ± O(dτ/|û4|) and ∀j 6= i, |vTwj | = O( √ τ/|û4|) where wi are columns of (TW∗)−1.
Ge et al. (2017) do not mention û2 but it is necessary in the non-orthogonal weight vectors case for the correct reduction. Since for us, this value can be small, we mention the dependence.Note that these are not exactly the directions w∗i that we need, one way to think about is that we can get the correct directions by estimating all columns and then inverting.
One-hidden-layer to Deep Neural Network. Consider the loss with f instead of flin:
min z : G(z) = −sgn(û4)E[f(x)H4(zTx)] + λ(E[f(x)H2(zTx)]− û2)2
We previously showed that f is close to flin in expectation due to the high threshold property. This also implies that Glin and G are close and so are the gradients and (eignevalues of) hessians of the same. This closeness implies that the landscape properties of one approximately transfers to the other function. More formally, Theorem 4. Let Z be an ( , τ)-local minimum of functionA. If ||∇(B−A)(Z)|| ≤ ρ and ||∇2(B− A)(Z)|| ≤ γ then Z is an ( + ρ, τ + γ)-local minimum of function B and vice-versa.
We will now apply above lemma on our Glin(z) and G(z). Claim 2. For λ = Θ(|û4|/û22) ≈ dη , an ( , τ)-approximate local minima of G (for small enough , τ ≤ d−2η) is an (O(log d)d−(1+p)η+3, O(log d)d−(1+p)η+3)-approximate local minima of Glin. This implies z is such that for an index i, |zTwi| = 1 ± O(1)d−2/3pη+3 and ∀j 6= i, |zTwj | = O(1)d−1/3pη+3/2 where wi are columns of (TW∗)−1 (ignoring log d factors). Note: For ReLU, setting t = √ C log d for large enough C > 0 we can get closeness 1/poly(d) to the columns of (TW∗)−1. Refer Appendix B.7 for details for sigmoid.
The paper Ge et al. (2017) also provides an alternate optimization that when minimized simultaneously recovers the entire matrix W∗ instead of having to learn columns of (TW∗)−1 separately. We show how applying our methods can also be applied to that optimization in Appendix B.4 to recover W∗ by optimizing a single objective.
2.1 APPROXIMATE TO ARBITRARILY CLOSE FOR MONOTONE P
Assuming P is monotone, we can show that the approximate solution from the previous analysis can be refined to arbitrarily closeness using a random search method followed by approximately finding the angle of our current estimate to the true direction.
The idea at a high level is to correlate with δ′(zTx − t) where δ is the Dirac delta function. It turns out that the correlation is maximized when z is equal to one of the wi. Correlation with δ′(zTx−t) is checking how fast the correlation of f with δ(zTx−t) is changing as you change t. To understand this look at the case when our activation u is the sign function then note that correlation of ut(wTx− t) with δ′(wTx− t) is very high as its correlation with δ(wTx− t′) is 0 when t′ < t and significant when t′ > t. So as we change t’ slightly from t− to t+ there is a sudden increase. If z and w differ then it can be shown that correlation of ut(wTx− t) with δ′(zTx− t) essentially depends on cot(α) where α is the angle between w and z (for a quick intuition note that one can
prove that E[ut(wTx)δ′(zTx)] = c cot(α). See Lemma 16 in Appendix). In the next section we will show how the same ideas work for non-monotone P even if it may not have any linear terms but we only manage to prove polynomial sample complexity for finding w instead of polynomial time complexity.
In this section we will not correlate exactly with δ′(zTx− t) but instead we will use this high level idea to estimate how fast the correlation with δ(zTx − t′) changes between two specific values as one changes t′, to get an estimate for cot(α). Secondly since we can’t to a smooth optimization over z, we will do a local search by using a random perturbation and iteratively check if the correlation has increased. We can assume that the polynomial P doesn’t have a constant term c0 as otherwise it can easily be determined and cancelled out6.
We will refine the weights one by one. WLOG, let us assume that w∗1 = e1 and we have z such that zTw∗1 = z1 = cos −1(α1). Let l(z, t, ) denote {x : zTx ∈ [t− , t]} for z ∈ Sn−1.
Algorithm 1 RefineEstimate 1: Run EstimateTanAlpha on z to get s = tan(α) where α is the angle between z and w∗1 . 2: Perturb current estimate z by a vector along the d− 1 dimensional hyperplane normal to z with
the distribution n(0,Θ(α/d))d−1 to get z′. 3: Run EstimateTanAlpha on z′ to get s′ = tan(α′) where α′ is the angle between z′ and w∗1 . 4: if α′ ≤ O(α/d) then 5: z ← z′ 6: Repeat till α′ ≤ .
Algorithm 2 EstimateTanAlpha 1: Find t1 and t2 such that Pr[sgn(f(x))|x ∈ l(z, t′, )] at t1 is 0.4 and at t2 is 0.6. 2: Return t2−t1Φ−1(0.6)−Φ−1(0.4) .
The algorithm (Algorithm 1) estimates the angle of the current estimate with the true vector and then subsequently perturbs the vector to get closer after each successful iteration.
Theorem 5. Given a vector z ∈ Sd−1 such that it is 1/poly(d)-close to the underlying true vector w∗1 , that is cos
−1(zTw∗1) ≤ 1/poly(d), running RefineEstimate for O(T ) iterations outputs a vector z∗ ∈ Sd−1 such that cos−1((z∗)Tw∗1) ≤ ( 1− cd )T γ for some constant c > 0. Thus after O(d log(1/ )) iterations cos−1((z∗)Tw∗1) ≤ .
We prove the correctness of the algorithm by first showing that EstimateTanAlpha gives a multiplicative approximation to tan(α). The following lemma captures this property.
Lemma 3. EstimateTanAlpha(z) outputs y such that y = (1 ± O(η)) tan(α) where α is the angle between z and w∗1 .
Proof. We first show that the given probability when computed with sgn(xTw∗1−t) is a well defined function of the angle between the current estimate and the true parameter up to multiplicative error. Subsequently we show that the computed probability is close to the one we can estimate using f(x) since the current estimate is close to one direction. The following two lemmas capture these properties.
Lemma 4. For t, t′ and ≤ 1/t′, we have Pr[xTw∗1 ≥ t and x ∈ l(z, t′, )|x ∈ l(z, t, )] = Φc ( t− t∗ cos(α1) | sin(α1)| ) ±O( )t′
Lemma 5. For t′ ∈ [0, t/ cos(α1)], we have
Pr[sgn(f(x))|x ∈ l(z, t′, )] = Pr[sgn((w∗1)Tx− t)|x ∈ l(z, t, )] + de−Ω(t 2).
6for example with RELU activation, f will be c0 most of the time as other terms in P will never activate. So c0 can be set to say the median value of f .
Using the above, we can show that, t2 − t1 = ( Φ−1(0.6− η1 ±O( )t1)− Φ−1(0.4− η2 ±O( )t2) ) tan(α)
= ( Φ−1(0.6)− Φ−1(0.4)− (η1 ±O( )t1)(Φ−1)′(p1) + (η2 ±O( )t2)(Φ−1)′(p2) ) tan(α)
where η1, η2 > 0 are the noise due to estimating using f and p1 ∈ [0.6 − η1 ± O( )t1, 0.6] and p2 ∈ [0.4 − η2 ± O( )t2, 0.4] as long as t1, t2 ∈ [0, t/ cos(α1)]. The following lemma bounds the range of t1 and t2.
Lemma 6. We have 0 ≤ t1 ≤ t2 ≤ tcos(α1) .
Thus, we have, t2 − t1
Φ−1(0.6)− Φ−1(0.4) = (1±O (η1 + η2 + t2)) tan(α)
as long as η2+O( )t2 ≤ c for some constant c > 0. Thus, we can get a multiplicative approximation to tan(α) up to error η ( can be chosen to make its contribution smaller than η).
Finally we show (proof in Appendix ??) that with constant probability, a random perturbation reduces the angle by a factor of (1 − 1/d) of the current estimate hence the algorithm will halt after O(d log(1/ν)) iterations.
Lemma 7. By applying a random Gaussian perturbation along the d − 1 dimensional hyperplane normal to z with the distribution n(0,Θ(α/d))d−1 and scaling back to the unit sphere, with constant probability, the angle α (< π/2) with the fixed vector decreases by at least Ω(α/d).
3 SAMPLE COMPLEXITY
We extend the methods of the previous section to a broader class of polynomials but only to obtain results in terms of sample complexity. The main idea as in the previous section is to correlate with δ′(zTx−t) (the derivative of the dirac delta function) and find arg max||z||2=1 E[f(x)δ
′(zTx−t)]. We will show that the correlation goes to infinity when z is one of w∗i and bounded if it is far from all of them. From a practical standpoint we calculate δ′(zTx − s) by measuring correlation with 1 2 (δ(z
Tx− s+ )− δ(zTx− s− ). In the limit as → 0 this becomes δ′(zTx− s). δ(zTx− s) in turn is estimated using 1 (sgn(z
Tx− s+ )− sgn(zTx− s)), as in the previous section, for an even smaller ; however, for ease of exposition, in this section, we will assume that correlations with δ(zTx− s) can be measured exactly. Let us recall that f(x) = P (u((w∗1) Tx), u((w∗2) Tx), . . . , u((w∗d)
Tx)). Let C1(f, z, s) denote E[f(x)δ(zTx− s)] and let C2(f, z, s) denote E[f(x)(δ(zTx− s− )− δ(zTx− s+ )].
If u = sgn then P has degree at most 1 in each Xi. Let ∂P∂Xi denote the symbolic partial derivative of P with respect to Xi; so, it drops monomials without Xi and factors off Xi from the remaining ones. Let us separate dependence on Xi in P as follows:
P (X1, , .., Xd) = XiQi(X1, ..Xi−1, Xi+1, .., Xd) +R1(X1, .Xi−1, Xi+1, .., Xd)
then ∂P∂Xi = Qi.
We will overload the polynomial P such that P [x] to denote the polynomial computed by substituting Xi = u((w∗1)
Tx) and similarly for Q and R. Under this notation f(x) = P [x]. We will also assume that |P (X)| ≤ ||X||O(1) = ||X||c1 (say). By using simple correlations we will show: Theorem 6. If u is the sgn function, P (X) ≤ ||X||c1 and for all i, E[Qi[x]|(w∗i )Tx = t] ≥ 3 then using poly( d 3 2 ) samples one can determine the w ∗ i ’s within error 2. 7
Note that if all the w∗i ’s are orthogonal then Xi are independent and E [ Qi[x] ∣∣(w∗i )Tx = t] is just value ofQi evaluated by settingXi = 1 and setting all the the remainingXj = µwhere µ = E[Xj ]. This is same as 1/µ times the coefficient of Xi in P (µ(X1 + 1), . . . , µ(Xd + 1)).
7The theorem can be extended to ReLU by correlating with the second derivative δ′′ (see Appendix C.1).
Corollary 1. If u is the sgn function and w∗i s are orthogonal then in sample complexity poly( d 3 2
) one can determine W∗ within error 2 in each entry, if the coefficient of the linear terms in P (µ(X1 + 1), µ(X2 + 1), µ(X3 + 1), ..) is larger than 3µ, where µ = E[Xi].
The main point behind the proof of Theorem 6 is that the correlation is high when z is along one of w∗i and negligible if it is not close to any of them.
Lemma 8. Assuming P (X) < ||X||c1 . If z = w∗i then C2(f, z, t) = φ(t)E [ ∂P ∂Xi ∣∣∣zTx = t] + dO(1). Otherwise if all angles αi between z and w∗i are at least 2 it is at most d O(1)/ 2.
We will use the notation g(x)x=s to denote g(x) evaluated at x = s. Thus Cauchy’s mean value theorem can be stated as g(x + ) − g(x) = [g′(s)](s = s′ ∈ [x, x + ]). We will over load the notation a bit: φ(zTx = s) will denote the probability density that vzTx = s; so if z is a unit vector this is just φ(s); φ(zT1 x = s1, z T 2 x = s2) denotes the probability density that both zT1 x = s1, z T 2 x = s2; so again if z1, z2 are orthonormal then this is just φ(s1)φ(s2).
The following claim interprets correlation with δ(zTx − s) as the expected value along the corresponding plane zTx = s. Claim 3. E[f(x)δ(zTx− s)] = E[f(x)|zTx = s]φ(zTx = s).
The following claim computes the correlation of P with δ′(zTx− s). Claim 4. E[P [x]δ′(zTx = s)] is equal to ∑ i | cot(αi)|φ(zTx = s, (w∗i )Tx = t)
E [ ∂P ∂Xi [x]|zTx = s, (w∗i )Tx = t ] + φ′(s)E[P [x]|zTx = s].
We use this to show that the correlation is bounded if all the angles are lower bounded. Claim 5. If P (X) ≤ ||X||c1 and if z has an angle of at least 2 with all the w∗i ’s then C2(f, z, s) ≤ dO(1)/ 2.
Above claims can be used to prove main Lemma 8. Refer to the Appendix C for proofs.
Proof of Theorem 6. If we wish to determine w∗i within an angle of accuracy 2 let us set to be O( 3 2φ(t)d
−c). From Lemma 8, for some large enough c, this will ensure that if all αi > 2 the correlation is o(φ(t) 3). Otherwise it is φ(t) 3(1±o(1)). Since φ(t) = poly(1/d), given poly( d 2 3 ) samples, we can test if a given direction is within accuracy 2 of a w∗i or not.
4 STRONGER RESULTS UNDER STRUCTURAL ASSUMPTIONS
Under additional structural assumptions on W∗ such as the weights being binary, that is, in {0, 1}, sparsity or certain restrictions on activation functions, we can give stronger recovery guarantees. Proofs have been deferred to Appendix D.
Theorem 7. For activation ut(a) = eρ(a−t). Let the weight vectors w∗i be 0, 1 vectors that select the coordinates of x. For each i, there are exactly d indices j such that wij = 1 and the coefficient of the linear terms in P (µ(X1 + 1), µ(X2 + 1), µ(X3 + 1), ..) for µ = e−ρt is larger than the coefficient of all the product terms (constant factor gap) then we can learn the W∗.
In order to prove the above, we will construct a correlation graph over x1, . . . , xn and subsequently identify cliques in the graph to recover w∗i ’s.
With no threshold, recovery is still possible for disjoint, low l1-norm vector. The proof uses simple correlations and shows that the optimization landscape for maximizing these correlations has local maximas being w∗i ’s. Theorem 8. For activation u(a) = ea. If all w∗i ∈ {0, 1}n are disjoint, then we can learn w∗i as long as P has all positive coefficients and product terms have degree at most 1 in each variable.
For even activations, it is possible to recover the weight vectors even when the threshold is 0. The technique used is the PCA like optimization using hermite polynomials as in Section 2. Denote C(S, µ) = ∑ S⊆S′⊆[n] cS′µ |S′|.
Theorem 9. If the activation is even and for every i, j: C({i}, û0) + C({j}, û0) > 6û22 û0û4 C({i, j}, û0) then there exists an algorithm that can recover the underlying weight vectors.
5 CONCLUSION
In this work we show how activations in a deep network that have a high threshold make it easier to learn the lowest layer of the network. We show that for a large class of functions that represent the upper layers, the lowest layer can be learned with high precision. Even if the threshold is low we show that the sample complexity is polynomially bounded. An interesting open direction is to apply these methods to learn all layers recursively. It would also be interesting to obtain stronger results if the high thresholds are only present at a higher layer based on the intuition we discussed.
A PREREQUISITES
A.1 HERMITE POLYNOMIALS
Hermite polynomials form a complete orthogonal basis for the gaussian distribution with unit variance. For more details refer to Wikipedia contributors (2018). Let hi be the normalized hermite polynomials. They satisfy the following,
Fact 0. E[hn(x)] = 0 for n > 0 and E[h0(x)] = 1.
Fact 1. Ea∼N(0,1)[hi(a)hj(a)] = δij where δij = 1 iff i = j.
This can be extended to the following:
Fact 2. For a, b with marginal distribution N(0, 1) and correlation ρ, E[hi(a)hj(b)] = δijρj .
Consider the following expansion of u into the Hermite basis (h_i):
u(a) = ∑_{i=0}^∞ û_i h_i(a).
Lemma 9. For unit norm vectors v, w, E[u(v^T x) h_j(w^T x)] = û_j (v^T w)^j.
Proof. Observe that v^T x and w^T x have marginal distribution N(0, 1) and correlation v^T w. Thus, using Fact 2,
E[u(v^T x) h_j(w^T x)] = ∑_{i=1}^∞ û_i E[h_i(v^T x) h_j(w^T x)] = ∑_{i=1}^∞ û_i δ_{ij} (v^T w)^j = û_j (v^T w)^j.
For Gaussians with mean 0 and variance σ², define the weighted Hermite polynomials H^σ_l(a) = |σ|^l h_l(a/σ). Given input v^T x for x ∼ N(0, I), we suppress the superscript σ = ||v||.
Corollary 2. For a non-zero vector v (not necessarily unit norm) and a unit norm vector w, E[H_i(v^T x) h_j(w^T x)] = δ_{ij} (v^T w)^j.
Proof. It follows as in the proof of the previous lemma: since v^T x ∼ N(0, ||v||²), we have H_i(v^T x) = ||v||^i h_i(v^T x/||v||), and (v/||v||)^T x and w^T x have correlation v^T w/||v||. Thus, by Fact 2,
E[H_i(v^T x) h_j(w^T x)] = ||v||^i δ_{ij} (v^T w/||v||)^j = δ_{ij} (v^T w)^j.
Fact 3. h_n(x + y) = 2^{−n/2} ∑_{k=0}^n (n choose k) h_{n−k}(x√2) h_k(y√2).
Fact 4. h_n(γx) = ∑_{k=0}^{⌊n/2⌋} γ^{n−2k} (γ² − 1)^k (n choose 2k) ((2k)!/k!) 2^{−k} h_{n−2k}(x).
Fact 5. α(n, m, γ) = E[h_m(x) h_n(γx)] = γ^{n−2k} (γ² − 1)^k (n choose 2k) ((2k)!/k!) 2^{−k} for k = (n − m)/2 if k is a non-negative integer, else 0.
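These identities are easy to sanity-check numerically. The following is a small, self-contained sketch of ours (not from the paper) that estimates E[h_i(a)h_j(b)] by Monte Carlo for correlated Gaussians and compares it against Fact 2, using numpy's probabilists' Hermite basis normalized by 1/√(i!).

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

rng = np.random.default_rng(0)

def h(i, x):
    """Normalized probabilists' Hermite polynomial h_i = He_i / sqrt(i!)."""
    coeffs = np.zeros(i + 1)
    coeffs[i] = 1.0
    return hermeval(x, coeffs) / np.sqrt(factorial(i))

# Jointly Gaussian (a, b) with unit marginals and correlation rho.
rho, n = 0.6, 2_000_000
a = rng.standard_normal(n)
b = rho * a + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

for i in range(4):
    for j in range(4):
        est = np.mean(h(i, a) * h(j, b))
        ref = rho**j if i == j else 0.0  # Fact 2: E[h_i(a) h_j(b)] = delta_ij * rho^j
        assert abs(est - ref) < 1e-2, (i, j, est, ref)
print("Fact 2 verified up to Monte Carlo error")
```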
A.2 PROPERTIES OF MATRICES
Consider a matrix A ∈ R^{m×m}. Let σ_i(A) denote the i-th singular value of A, so that σ₁(A) ≥ σ₂(A) ≥ … ≥ σ_m(A), and set κ(A) = σ₁(A)/σ_m(A).
Fact 6. |det(A)| = ∏m i=1 σi(A).
Fact 7. Let B be an (m − k) × (m − k) principal submatrix of A; then κ(B) ≤ κ(A).
A.3 ACTIVATION FUNCTIONS
Lemma 10. For u being a high-threshold ReLU, that is, u_t(a) = max(0, a − t), we have for t ≥ C, for a large enough constant C > 0: E_{g∼N(0,σ²)}[u_t(g)] ≤ e^{−t²/(2σ²)}. Also, û₄, û₂ = t^{Θ(1)} e^{−t²/2}.
Proof. We have
E_{g∼N(0,σ²)}[u_t(g)] = (1/(√(2π)σ)) ∫_{−∞}^∞ max(0, g − t) e^{−g²/(2σ²)} dg
= (1/(√(2π)σ)) ∫_t^∞ (g − t) e^{−g²/(2σ²)} dg
≤ (1/(√(2π)σ)) ∫_t^∞ g e^{−g²/(2σ²)} dg
= (σ/√(2π)) ∫_{t²/(2σ²)}^∞ e^{−h} dh
= (σ/√(2π)) e^{−t²/(2σ²)}.
Also,
û₄ = E_{g∼N(0,1)}[u_t(g) h₄(g)]
= (1/√(2π)) ∫_{−∞}^∞ max(0, g − t)(g⁴ − 6g² + 3) e^{−g²/2} dg
= (1/√(2π)) ∫_t^∞ (g − t)(g⁴ − 6g² + 3) e^{−g²/2} dg
≥ (1/√(2π)) (t⁴ − 6t²)(1/t) e^{−t²/2 − 1 − 1/(2t²)}
≥ Ω(t³ e^{−t²/2}).
To upper bound,
û₄ = (1/√(2π)) ∫_{−∞}^∞ max(0, g − t)(g⁴ − 6g² + 3) e^{−g²/2} dg
= (1/√(2π)) ∫_t^∞ (g − t)(g⁴ − 6g² + 3) e^{−g²/2} dg
≤ (1/√(2π)) ∫_t^∞ 2g⁵ e^{−g²/2} dg
= (1/√(2π)) ∫_{t²/2}^∞ O(h²) e^{−h} dh
= O(t⁴ e^{−t²/2}).
Similar analysis holds for û2.
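The scaling in Lemma 10 can likewise be checked by simulation; below is a short sketch of ours (not from the paper) that estimates E[u_t(g)] and û₄ for the high-threshold ReLU with σ = 1 and compares them to the stated e^{−t²/2} behavior.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(1)
g = rng.standard_normal(5_000_000)
h4 = hermeval(g, [0, 0, 0, 0, 1]) / np.sqrt(24.0)  # normalized h_4(g)

for t in [2.0, 2.5, 3.0]:
    relu_t = np.maximum(0.0, g - t)                   # u_t(g) = max(0, g - t)
    mean_ut = relu_t.mean()
    bound = np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)    # upper bound from the proof
    u4_hat = (relu_t * h4).mean()                     # hat{u}_4 = E[u_t(g) h_4(g)]
    print(f"t={t}: E[u_t(g)]={mean_ut:.2e} <= {bound:.2e}, "
          f"u4_hat={u4_hat:.2e} vs e^(-t^2/2)={np.exp(-t**2 / 2):.2e}")
```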
Observe that sgn can be bounded very similarly by replacing g − t with 1, which affects the bounds by at most a factor polynomial in t.
Lemma 11. For u being a high-threshold sign function, that is, u_t(a) = sgn(a − t), we have for t ≥ C, for a large enough constant C > 0: E_{g∼N(0,σ²)}[u_t(g)] ≤ e^{−t²/(2σ²)}. Also, û₄, û₂ = t^{Θ(1)} e^{−t²/2}.
For sigmoid, the dependence varies as follows:
Lemma 12. For u being a high-threshold sigmoid, that is, u_t(a) = 1/(1 + e^{−(a−t)}), we have for t ≥ C, for a large enough constant C > 0: E_{g∼N(0,σ²)}[u_t(g)] ≤ e^{−t + σ²/2}. Also, û₄, û₂ = Θ(e^{−t}).
Proof. We have
E_{g∼N(0,σ²)}[u_t(g)] = (1/(√(2π)σ)) ∫_{−∞}^∞ (1/(1 + e^{−(g−t)})) e^{−g²/(2σ²)} dg
= (e^{−t}/(√(2π)σ)) ∫_{−∞}^∞ (1/(e^{−t} + e^{−g})) e^{−g²/(2σ²)} dg
≤ (e^{−t}/(√(2π)σ)) ∫_{−∞}^∞ e^g e^{−g²/(2σ²)} dg
= (e^{−t} e^{σ²/2}/(√(2π)σ)) ∫_{−∞}^∞ e^{−(g−σ²)²/(2σ²)} dg
= e^{−t} e^{σ²/2}.
Also,
û₄ = E_{g∼N(0,1)}[u_t(g) h₄(g)]
= (1/√(2π)) ∫_{−∞}^∞ (1/(1 + e^{−(g−t)})) (g⁴ − 6g² + 3) e^{−g²/2} dg
= (e^{−t}/√(2π)) ∫_{−∞}^∞ (1/(e^{−t} + e^{−g})) (g⁴ − 6g² + 3) e^{−g²/2} dg
≥ (e^{−t}/√(2π)) ∫_0^∞ (1/(e^{−t} + e^{−g})) (g⁴ − 6g² + 3) e^{−g²/2} dg
≥ (e^{−t}/√(2π)) ∫_0^∞ (1/2)(g⁴ − 6g² + 3) e^{−g²/2} dg
= Ω(e^{−t}).
We can upper bound similarly and bound û2.
B APPROXIMATE RECOVERY WITH LINEAR TERMS
B.1 CONSTRAINED OPTIMIZATION VIEW OF LANDSCAPE DESIGN
Let us consider the linear case where the w*_i's are orthonormal. Consider the following maximization problem for even l ≥ 4:
max_{z∈S^{n−1}} sgn(û_l) · E[f(x) · H_l(z^T x)],
where h_l is the l-th Hermite polynomial. Then we have
sgn(û_l) · E[f(x) · h_l(z^T x)] = sgn(û_l) · E[(∑_{i=1}^k c_i u_t((w*_i)^T x)) · h_l(z^T x)]
= sgn(û_l) · ∑_{i=1}^k c_i E[u_t((w*_i)^T x) · h_l(z^T x)] = |û_l| ∑_{i=1}^k c_i ((w*_i)^T z)^l.
It is easy to see that for z ∈ S^{n−1}, the above is maximized at exactly one of the w*_i's (up to sign flip for even l) for l ≥ 3, as long as û_l ≠ 0. Thus, each w*_i is a local maximum of the above problem (equivalently, a local minimum of the function L below).
Let L(z) = −∑_{i=1}^k c_i z_i^l. For the constraint ||z||₂ = 1, we have the following optimality conditions (see Nocedal & Wright (2006) for more details).
First order:
∇L(z) − (z^T∇L(z)/||z||²) z = 0 and ||z||₂ = 1.
Applied to our function, this gives us that for λ = −∑_i c_i z_i^l/||z||² (so λ < 0),
−l c_i z_i^{l−1} − 2λ z_i = 0.
The above implies that either z_i = 0 or z_i^{l−2} = −2λ/(l c_i), with ||z||₂ = 1. For this to hold, z is such that for some set S ⊆ [n] with |S| ≥ 1, only the i ∈ S have z_i ≠ 0, and ∑_{i∈S} z_i² = 1. This implies that for all i ∈ S, z_i^{l−2} = −2λ/(l c_i).
Second order:
For all w ≠ 0 such that w^T z = 0: w^T(∇²L(z) − 2λI)w ≥ 0.
For our function, we have ∇²L(z) = −l(l − 1) diag(c ⊙ z^{l−2}), which with the first-order condition gives
(∇²L(z))_{ij} = 2(l − 1)λ if i = j and i ∈ S, and 0 otherwise.
For the second-order condition to be satisfied, we will show that |S| = 1. Suppose |S| ≥ 2; then, choosing w such that w_i = 0 for i ∉ S and w^T z = 0 (it is possible to choose such a w since |S| ≥ 2), we get w^T(∇²L(z) − 2λI)w = 2(l − 2)λ||w||², which is negative since λ < 0; thus these points cannot be local minima. However, for |S| = 1 we cannot have such a w, since to satisfy w^T z = 0 we need w_i = 0 for all i ∈ S; this gives us w^T(∇²L(z) − 2λI)w = −2λ||w||², which is always positive. Thus z = ±e_i are the only local minima of this problem.
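This landscape claim is easy to illustrate numerically. The sketch below (ours, for intuition only) runs projected gradient ascent on ∑_i c_i z_i⁴ over the unit sphere, which is equivalent to minimizing L for l = 4, and checks that the iterate lands on a signed coordinate vector ±e_i.

```python
import numpy as np

rng = np.random.default_rng(2)
d, l = 10, 4
c = rng.uniform(0.5, 2.0, size=d)        # positive coefficients c_i

def project(z):
    return z / np.linalg.norm(z)

z = project(rng.standard_normal(d))      # random start on the sphere
for _ in range(2000):
    grad = l * c * z ** (l - 1)          # gradient of sum_i c_i z_i^l
    z = project(z + 0.05 * grad)         # ascent step + projection to ||z|| = 1

print(np.round(z, 3))
# Up to sign, z should be a coordinate vector: one entry of magnitude ~1.
assert np.isclose(np.abs(z).max(), 1.0, atol=1e-3)
```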
B.2 IMPORTANT RESULTS FROM GE ET AL. (2017)
Lemma 13 (Ge et al. (2017)). If z is an (ε, τ)-local minimum of F(z) = −∑_i α_i z_i⁴ + λ(∑_i z_i² − 1)² for ε ≤ √(τ³/α_min), where α_min = min_i α_i, then
• (Lemma 5.2) |z|_2nd ≤ √(τ/α_min), where |z|_2nd denotes the magnitude of the second largest entry (in magnitude) of z.
• (Derived from Proposition 5.7) z_max = ±1 ± O(dτ/α_min) ± O(ε/λ), where |z|_max is the value of the largest entry (in magnitude) of z.
B.3 OMITTED PROOFS FOR ONE-BY-ONE RECOVERY
Proof of Lemma 1. Let O ∈ R^{d×d} be the orthonormal basis (row-wise) of the subspace spanned by the w*_i for all i ∈ [d], generated using Gram-Schmidt (with the procedure done in order, with the elements of S first). Now let O_S ∈ R^{|S|×d} be the matrix corresponding to the first |S| rows and let O⊥_S ∈ R^{(d−|S|)×d} be that corresponding to the remaining rows. Note that OW* (where W* has the same ordering) is an upper triangular matrix under this construction.
E[∏_{j∈S} u_t((w*_j)^T x)]
= (1/(2π)^{n/2}) ∫_x ∏_{i∈S} u_t(x^T w*_i) e^{−||x||²/2} dx
= (1/(2π)^{n/2}) ∫_x ∏_{i∈S} u_t((O_S w*_i)^T O_S x) e^{−(||O_S x||² + ||O⊥_S x||²)/2} dx
= ((1/(2π)^{|S|/2}) ∫_{x′∈R^{|S|}} ∏_{i∈S} u_t((O_S w*_i)^T x′) e^{−||x′||²/2} dx′) · ((1/(2π)^{(d−|S|)/2}) ∫_{x′∈R^{d−|S|}} e^{−||x′||²/2} dx′)
= (1/(2π)^{|S|/2}) ∫_{x′∈R^{|S|}} ∏_{i∈S} u_t((O_S w*_i)^T x′) e^{−||x′||²/2} dx′
= (|det(O_S W*_S)|^{−1}/(2π)^{|S|/2}) ∫_{b∈R^{|S|}} ∏_{i∈S} u_t(b_i) e^{−||(O_S W*_S)^{−T} b||²/2} db.
Now observe that O_S W*_S is also an upper triangular matrix, since it is a principal sub-matrix of OW*. Thus, using Facts 6 and 7, we get the last equality. Also, the row with a single non-zero entry has that entry equal to 1 (since ||w*_i|| = 1 for all i). This gives us that the inverse will also have a row with a single non-zero entry equal to 1. WLOG assume index 1 corresponds to this row. Thus we can split this as follows:
E[∏_{j∈S} u_t((w*_j)^T x)] ≤ |det(O_S W*_S)|^{−1} ((1/√(2π)) ∫_{b₁} u_t(b₁) e^{−b₁²/2} db₁) ∏_{i∈S\{1}} (1/√(2π)) ∫_{b_i} u_t(b_i) e^{−b_i²/(2||O_S W*_S||²)} db_i
≤ |det(O_S W*_S)|^{−1} ((1/√(2π)) ∫_{b₁} u_t(b₁) e^{−b₁²/2} db₁) ∏_{i∈S\{1}} (1/√(2π)) ∫_{b_i} u_t(b_i) e^{−b_i²/(2||W*||²)} db_i
≤ ρ(t, 1) (κ(W*) ρ(t, ||W*||))^{|S|−1}.
Proof of Claim 1. Consider the SVD of the matrix M = UDU^T. Let W = UD^{−1/2} and y_i = √c_i W^T w*_i for all i. It is easy to see that the y_i are orthogonal. Let F(z) = G(Wz):
F(z) = |û₄| ∑_i c_i (z^T W^T w*_i)⁴ − λû₂² (∑_i c_i (z^T W^T w*_i)² − 1)²
= |û₄| ∑_i (1/c_i)(z^T y_i)⁴ − λû₂² (∑_i (z^T y_i)² − 1)².
Since the y_i are orthogonal, for the purposes of analysis we can assume that y_i = e_i; thus the formulation reduces to max_z |û₄| ∑_i (1/c_i) z_i⁴ − λ′(||z||² − 1)², up to the scaling λ′ = λû₂². Note that this is of the form in Lemma 13, hence using that we can show that the approximate local minima of F(z) are close to the y_i, and thus the local maxima of G(z) are close to W y_i = √c_i WW^T w*_i = √c_i M^{−1} w*_i due to the linear transformation. This can alternately be viewed as the columns of (TW*)^{−1}, since TW* M^{−1} (TW*)^T = I.
Proof of Theorem 4. Let Z be an (ε, τ)-local minimum of A. Then we have ||∇A(Z)|| ≤ ε and λ_min(∇²A(Z)) ≥ −τ. Observe that
||∇B(Z)|| = ||∇(A + (B − A))(Z)|| ≤ ||∇A(Z)|| + ||∇(B − A)(Z)|| ≤ ε + ρ.
Also observe that
λ_min(∇²B(Z)) = λ_min(∇²(A + (B − A))(Z)) ≥ λ_min(∇²A(Z)) + λ_min(∇²(B − A)(Z)) ≥ −τ − ||∇²(B − A)(Z)|| ≥ −τ − γ.
Here we use |λ_min(M)| ≤ ||M|| for any symmetric matrix M. To prove this, note that ||M|| = max_{x∈S^{n−1}} ||Mx||. Write x = ∑_i x_i v_i, where the v_i are the eigenvectors. Then Mx = ∑_i x_i λ_i(M) v_i with ∑_i x_i² = 1, which gives ||Mx|| = √(∑_i x_i² λ_i²(M)), so ||M|| ≥ |λ_min(M)|.
Proof of Lemma 2. Expanding f, we have
E[|∆(x)|] = E[|∑_{S⊆[d]:|S|>1} c_S ∏_{j∈S} u_t((w*_j)^T x)|]
≤ ∑_{S⊆[d]:|S|>1} |c_S| E[∏_{j∈S} u_t((w*_j)^T x)]
≤ C ∑_{S⊆[d]:|S|>1} ρ(t, 1) ((1/σ_min(W*)) ρ(t, ||W*||))^{|S|−1}  (using Lemma 1)
= C ∑_{i=1}^d (d choose i) ρ(t, 1) ((1/σ_min(W*)) ρ(t, ||W*||))^{i−1}
≤ C ∑_{i=1}^d d ρ(t, 1) ((d/σ_min(W*)) ρ(t, ||W*||))^{i−1}  (using (d choose i) ≤ d^i)
≤ C d² ρ(t, 1) ((d/σ_min(W*)) ρ(t, ||W*||))  (using the assumption on t).
Lemma 14. For any function L such that ||L(z, x)|| ≤ C(z)||x||^{O(1)}, where C is a function that is not dependent on x, we have ||E[∆(x)L(z, x)]|| ≤ O(log d) C(z) d^{−(1+p)η+3}.
Proof. We have
||E[∆(x)L(z, x)]|| ≤ E[|∆(x)| ||L(z, x)||] ≤ E[|∆(x)| C(z)||x||^{O(1)}]
= C(z)(E[|∆(x)| ||x||^{O(1)} | ||x|| ≥ c] Pr[||x|| ≥ c] + E[|∆(x)| ||x||^{O(1)} | ||x|| < c] Pr[||x|| < c])
≤ C(z)(E[||x||^{O(1)} | ||x|| ≥ c] Pr[||x|| ≥ c] + c^{O(1)} E[|∆(x)|])
= C(z)(c^{O(1)} e^{−c²/2} + c^{O(1)} E[|∆(x)|]).
Now using Lemma 2 to bound E[|∆(x)|], for c = Θ(√(η log d)) we get the required result.
Lemma 15. For ||z|| = Ω(1) and λ = Θ(|û₄|/û₂²) ≈ d^η, we have ||∇G(z)|| ≥ Ω(1) d^{−η}.
Proof. Let K = κ(W*), which by assumption is Θ(1). We will argue that a local minimum of G cannot have z with large norm. First let us argue this for G_lin(z). We know that
G_lin(z) = −α ∑_i (z^T w*_i)⁴ + λβ² ((∑_i (z^T w*_i)²) − 1)²,
where α = |û₄| and β = û₂. We will argue that z^T∇G_lin(z) is large if ||z|| is large:
z^T∇G_lin(z) = −4α ∑_i (z^T w*_i)³(z^T w*_i) + 2λβ² (∑_i (z^T w*_i)² − 1)(∑_i 2(z^T w*_i)(z^T w*_i))
= −4α ∑_i (z^T w*_i)⁴ + 4λβ² (∑_i (z^T w*_i)² − 1)(∑_i (z^T w*_i)²).
Let y = W*z; then K||z|| ≥ ||y|| ≥ ||z||/K, since K is the condition number of W*. This implies
z^T∇G_lin(z) = −4α ∑_i y_i⁴ + 4λβ² (||y||² − 1)||y||²
≥ 4||y||² ((−α + λβ²)||y||² − λβ²)  (using ∑_i y_i⁴ ≤ ||y||⁴)
≥ ||y||⁴ (−α + λβ²)  (for ||y|| = Ω(1) large enough)
≥ Ω(1) d^{−η} ||y||⁴.
Since ||y|| ≥ ||z||/K = Ω(1), by the assumptions on λ and z we have z^T∇G_lin(z) ≥ Ω(λβ²||y||⁴) = Ω(1) d^{−η} ||z||⁴. This implies ||∇G_lin(z)|| = Ω(1) d^{−η} ||z||³. Now we need to argue for G:
G(z) − G_lin(z) = −sgn(û₄)E[(f_lin(x) + ∆(x))H₄(z^Tx)] + λ(E[(f_lin(x) + ∆(x))H₂(z^Tx)] − β)² + sgn(û₄)E[f_lin(x)H₄(z^Tx)] − λ(E[f_lin(x)H₂(z^Tx)] − β)²
= −sgn(û₄)E[∆(x)H₄(z^Tx)] + λE[∆(x)H₂(z^Tx)]² + 2λE[∆(x)H₂(z^Tx)]E[f_lin(x)H₂(z^Tx) − β]
= −sgn(û₄)||z||⁴E[∆(x)h₄(z^Tx/||z||)] + λ||z||⁴E[∆(x)h₂(z^Tx/||z||)]² + 2λ||z||⁴E[∆(x)h₂(z^Tx/||z||)]E[f_lin(x)h₂(z^Tx/||z||)] − 2λβ||z||²E[∆(x)h₂(z^Tx/||z||)].
Now h₄(z^Tx/||z||) has no gradient in the direction of z, so z^T∇h₄(z^Tx/||z||) = 0; similarly z^T∇h₂(z^Tx/||z||) = 0. So
z^T∇(G(z) − G_lin(z)) = −4sgn(û₄)||z||⁴E[∆(x)h₄(z^Tx/||z||)] + 4λ||z||⁴(E[∆(x)h₂(z^Tx/||z||)])² + 8λ||z||⁴E[∆(x)h₂(z^Tx/||z||)]E[f_lin(x)h₂(z^Tx/||z||)] − 4λβ||z||²E[∆(x)h₂(z^Tx/||z||)].
We know that E[f_lin(x)h₂(z^Tx/||z||)] has a factor of β, giving us, using Lemma 14,
|z^T∇(G(z) − G_lin(z))| ≤ O(log d) d^{−(1+p)η+3} ||z||⁴.
So z^T∇G(z) is also Ω(||z||⁴), and hence ||∇G(z)|| ≥ Ω(1) d^{−η}.
Proof of Claim 2. We have G − G_lin as follows:
G(z) − G_lin(z) = −sgn(û₄)E[(f_lin(x) + ∆(x))H₄(z^Tx)] + λ(E[(f_lin(x) + ∆(x))H₂(z^Tx)] − û₂)² + sgn(û₄)E[f_lin(x)H₄(z^Tx)] − λ(E[f_lin(x)H₂(z^Tx)] − û₂)²
= −sgn(û₄)E[∆(x)H₄(z^Tx)] + λ(E[∆(x)H₂(z^Tx)])² + 2λE[∆(x)H₂(z^Tx)]E[f_lin(x)H₂(z^Tx) − û₂].
Thus we have
∇(G(z) − G_lin(z)) = −sgn(û₄)E[∆(x)∇H₄(z^Tx)] + 2λE[∆(x)H₂(z^Tx)]E[∆(x)∇H₂(z^Tx)] + 2λE[f_lin(x)H₂(z^Tx) − û₂]E[∆(x)∇H₂(z^Tx)] + 2λE[∆(x)H₂(z^Tx)]E[f_lin(x)∇H₂(z^Tx)].
Observe that H₂ and H₄ are degree 2 and 4 (respectively) polynomials, thus the norms of the gradient and Hessian of the same can be bounded by at most O(||z|| ||x||⁴). Using Lemma 14 we can bound each term by roughly O(log d) d^{−(1+p)η+3} ||z||⁴. Note that λ being large does not hurt, as it is scaled appropriately in each term. Subsequently, using Lemma 15, we can show that ||z|| is bounded by a constant, since ||G(z)|| ≤ d^{−2η}. A similar analysis holds for the Hessian too.
Now applying Theorem 4 gives us that z is an (O(log d) d^{−(1+p)η+3}, O(log d) d^{−(1+p)η+3})-approximate local minimum of G_lin. This implies that it is also an (ε′ := C log(d) d^{−(1+2p)η+3}, τ′ := C log(d) d^{−(1+2p/3)η+3})-approximate local minimum of G_lin for large enough C > 0, by increasing τ. Observe that √(τ′³/|û₄|) = C^{3/2} log^{3/2}(d) d^{−(3/2+p)η+9/2}/d^{−η/2} = C^{3/2} log^{3/2}(d) d^{−(1+p)η+9/2} ≥ ε′. Now using Claim 1, we get the required result.
B.4 SIMULTANEOUS RECOVERY
Ge et al. (2017) also showed simultaneous recovery by minimizing the loss function G_lin defined below, which has a well-behaved landscape.
G_lin(W) = E[f_lin(x) ∑_{j,k∈[d], j≠k} ψ(w_j, w_k, x)] − γ E[f_lin(x) ∑_{j∈[d]} H₄(w_j^T x)] + λ ∑_i (E[f_lin(x) H₂(w_i^T x)] − û₂)²   (1)
where ψ(v, w, x) = H₂(v^Tx)H₂(w^Tx) + 2(v^Tw)² + 4(v^Tx)(w^Tx)v^Tw.
They gave the following result.
Theorem 10 (Ge et al. (2017)). Let c be a sufficiently small universal constant (e.g., c = 0.01 suffices), and suppose the activation function u satisfies û₄ ≠ 0. Assume γ ≤ c, λ ≥ Ω(|û₄|/û₂²), and let W* be the true weight matrix. The function G_lin satisfies the following:
1. Any saddle point W has a strictly negative curvature in the sense that λ_min(∇²G_lin(W)) ≤ −τ₀, where τ₀ = c · min{γ|û₄|/d, λû₂²}.
2. Suppose W is an (ε, τ₀)-approximate local minimum. Then W can be written as W^{−T} = PDW* + E, where D is a diagonal matrix with D_ii ∈ {±1 ± O(γ|û₄|/(λû₂²)) ± O(ε/λ)}, P is a permutation matrix, and the error term satisfies ||E|| ≤ O(εd/û₄).
We show that this minimization is robust. Let us consider the corresponding function G to Glin with the additional non-linear terms as follows:
G(W) = E[f(x) ∑_{j,k∈[d], j≠k} ψ(w_j, w_k, x)] − γ E[f(x) ∑_{j∈[d]} H₄(w_j, x)] + λ ∑_i (E[f(x)H₂(w_i, x)] − û₂)².
Now we can show that G and Glin are close as in the one-by-one case.
R(W) := G(W) − G_lin(W)
= E[∆(x)A(W, x)] − γE[∆(x)B(W, x)] + λ(E[f(x)C(W, x)]² − E[f_lin(x)C(W, x)]²)
= E[∆(x)A(W, x)] − γE[∆(x)B(W, x)] + λE[∆(x)C(W, x)] E[(f(x′) + f_lin(x′))C(W, x′)]
= E[∆(x)A(W, x)] − γE[∆(x)B(W, x)] + λE[∆(x)D(W, x)]
= E[∆(x)(A(W, x) − γB(W, x) + λD(W, x))]
= E[∆(x)L(W, x)],
where A(W, x) = ∑_{j,k∈[d], j≠k} ψ(w_j, w_k, x), B(W, x) = ∑_{j∈[d]} H₄(w_j, x), C(W, x) = ∑_i H₂(w_i, x), D(W, x) = C(W, x)E[(f(x′) + f_lin(x′))C(W, x′)] and L(W, x) = A(W, x) − γB(W, x) + λD(W, x).
Using a similar analysis as in the one-by-one case, we can show the required closeness. It is easy to see that ||∇L|| and ||∇²L|| will be bounded above by a constant-degree polynomial in O(log d) d^{−(1+p)η+3} max_i ||w_i||⁴. No row can have large norm: if any row is large, then looking at the gradient for that row reduces to the one-by-one case, where it cannot be larger than a constant. Thus we have the same closeness as in the one-by-one case. Combining this with Theorems 10 and 4, we have the following theorem:
Theorem 11. Let c be a sufficiently small universal constant (e.g., c = 0.01 suffices) and suppose Assumptions 1, 2 and 3 hold. Assume γ ≤ c, λ = Θ(d^η), and let W* be the true weight matrix. The function G satisfies the following:
1. Any saddle point W has a strictly negative curvature in the sense that λ_min(∇²G(W)) ≤ −τ₀, where τ₀ = O(log d) d^{−Ω(1)}.
2. Suppose W is a (d^{−Ω(1)}, d^{−Ω(1)})-approximate local minimum. Then W can be written as W^{−T} = PDW* + E, where D is a diagonal matrix with D_ii ∈ {±1 ± O(γ) ± d^{−Ω(1)}}, P is a permutation matrix, and the error term satisfies ||E|| ≤ O(log d) d^{−Ω(1)}.
Using standard optimization techniques, we can find such a local minimum.
B.5 APPROXIMATE TO ARBITRARY CLOSE
Lemma 16. If u is the sign function, then E[u(w^Tx)δ′(z^Tx)] = c|cot(α)|, where w, z are unit vectors, α is the angle between them, and c is some constant.
Proof. WLOG we can work in the plane spanned by z and w and assume that z is along the unit vector i and w = i cos α + j sin α. Thus we can replace the vector x by ix + jy, where x, y are independent standard normal scalars. Also note that u′ = δ (the Dirac delta function).
E[u(w^Tx)δ′(z^Tx)] = E[u(x cos α + y sin α)δ′(x)]
= ∫_y ∫_x u(x cos α + y sin α)δ′(x)φ(x)φ(y) dx dy.
Using the fact that ∫_x δ′(x)h(x) dx = h′(0), this becomes
= ∫_y φ(y)[(∂/∂x)(u(x cos α + y sin α)φ(x))]_{x=0} dy
= ∫_y φ(y)[φ(x)u′(x cos α + y sin α) cos α + φ′(x)u(x cos α + y sin α)]_{x=0} dy
= ∫_{y=−∞}^∞ φ(y)φ(0)δ(y sin α) cos α dy.
Substituting s = y sin α, this becomes
= ∫_{s=−∞/sin α}^{∞/sin α} φ(s/sin α)φ(0)δ(s) cos α (1/sin α) ds
= sgn(sin α) cot(α) φ(0) ∫_s φ(s/sin α)δ(s) ds = |cot(α)| φ(0)².
Proof of Lemma 4. Let us compute the probability of lying in the ε-band for any t:
Pr[x ∈ l(z, t, ε)] = Pr[t − ε ≤ z^Tx ≤ t] = Pr_{g∼N(0,||z||²)}[t − ε ≤ g ≤ t]
= (1/(√(2π)||z||)) ∫_{g=t−ε}^t e^{−g²/(2||z||²)} dg = (ε/(√(2π)||z||)) e^{−t̄²/(2||z||²)},
where the last equality follows from the mean-value theorem for some t̄ ∈ [t − ε, t]. Next we compute the following:
Pr[x^Tw*₁ ≥ t and x ∈ l(z, t′, ε)]
= (1/(2π)^{n/2}) ∫_x sgn(x₁ − t) 1[x ∈ l(z, t′, ε)] e^{−||x||²/2} dx
= (1/(2π)^{1/2}) ∫_{x₁=t}^∞ e^{−x₁²/2} ((1/(2π)^{(n−1)/2}) ∫_{x₋₁} 1[x₋₁ ∈ l(z₋₁, t′ − z₁x₁, ε)] e^{−||x₋₁||²/2} dx₋₁) dx₁
= (1/(2π)^{1/2}) ∫_{x₁=t}^∞ e^{−x₁²/2} Pr[x₋₁ ∈ l(z₋₁, t′ − z₁x₁, ε)] dx₁
= (1/(2π||z₋₁||)) ∫_{g=t′−ε}^{t′} ∫_{x₁=t}^∞ e^{−x₁²/2} e^{−(g−z₁x₁)²/(2||z₋₁||²)} dx₁ dg
= (1/(2π||z₋₁||)) ∫_{g=t′−ε}^{t′} e^{−g²/(2||z||²)} ∫_{x₁=t}^∞ e^{−(x₁ − gz₁/||z||²)² ||z||²/(2||z₋₁||²)} dx₁ dg
= (1/(√(2π)||z||)) ∫_{g=t′−ε}^{t′} e^{−g²/(2||z||²)} Φ^c((t||z||² − gz₁)/(||z₋₁|| ||z||)) dg
= (ε/√(2π)) e^{−t*²/2} Φ^c((t − t* cos(α₁))/|sin(α₁)|),
where the last equality follows from the mean-value theorem for some t* ∈ [t′ − ε, t′] (here ||z|| = 1). Combining, we get
Pr[x^Tw*₁ ≥ t and x ∈ l(z, t′, ε) | x ∈ l(z, t, ε)]
= e^{−(t*² − t̄²)/2} Φ^c((t − t* cos(α₁))/|sin(α₁)|) = Φ^c((t − t* cos(α₁))/|sin(α₁)|) ± O(ε)t′
for ε ≤ 1/t′.
Proof of Lemma 5. Recall that P is monotone with positive linear terms; thus for a high-threshold u (0 unless the input exceeds t, and positive after) we have sgn(f(x)) = ∨_i sgn(x^Tw*_i − t). This is because, for any i, P applied to X_i > 0 and ∀j ≠ i, X_j = 0 gives us c_i, which is positive. Also, P(0) = 0. Thus, sgn(P) is 1 if any of the inputs are positive. Using this, we have
Pr[sgn(f(x)) | x ∈ l(z, t′, ε)] ≥ Pr[sgn((w*₁)^Tx − t) | x ∈ l(z, t′, ε)].
Also,
Pr[sgn(f(x)) | x ∈ l(z, t′, ε)] ≤ ∑_i Pr[sgn(x^Tw*_i − t) | x ∈ l(z, t′, ε)]
= Pr[sgn((w*₁)^Tx − t) | x ∈ l(z, t′, ε)] + ∑_{i≠1} Pr[sgn(x^Tw*_i − t) | x ∈ l(z, t′, ε)]
≤ Pr[sgn((w*₁)^Tx − t) | x ∈ l(z, t′, ε)] + η,
where ∑_{i≠1} Pr[sgn(x^Tw*_i − t) | x ∈ l(z, t′, ε)] ≤ η. We will show that η is not large: since z is close to one of the vectors, it cannot be close to the others, thus α_i will be large for all i ≠ 1. Let us bound η:
∑_{i≠1} Pr[sgn(x^Tw*_i − t) | x ∈ l(z, t′, ε)] ≤ ∑_{i≠1} (Φ^c((t − t*_i cos(α_i))/|sin(α_i)|) + O(ε)t′_i)
≤ ∑_{i≠1} (Φ^c((t − t*_i cos(α_i))/|sin(α_i)|) + O(ε)t′)
≤ ∑_{i≠1} (Φ^c((t − t′ cos(α_i))/|sin(α_i)|) + O(ε)t′)
≤ ∑_{i≠1} (1/(√(2π)γ_i)) e^{−γ_i²/2} + O(ε)kt′,
where γ_i = (t − t′ cos(α_i))/|sin(α_i)|. The above follows since γ_i ≥ 0 by the assumption on t′. Under the assumption, letting β = max_{i≠1} cos(α_i), we have
γ_i ≥ t(1 − β/cos(α₁))/√(1 − β²) = Ω(t)
under our setting. Thus we have
∑_{i≠1} Pr[sgn(x^Tw*_i − t) | x ∈ l(z, t′, ε)] ≤ d e^{−Ω(t²)} + O(ε)dt = d e^{−Ω(t²)}
for small enough ε. | 1. What are the assumptions made by the paper regarding the learning parameters of a neural network?
2. Why are these assumptions considered unrealistic or not true in real-world scenarios?
3. How do these assumptions impact the effectiveness of the proposed algorithm?
4. Are there any alternative approaches that could be explored to address these limitations?
5. How might the paper's findings be improved or expanded upon to better reflect real-world neural networks? | Review | Review
This paper gives a new algorithm for learning parameters of neural network under several assumptions: 1. the threshold for the first layer is very high; 2. the future layers of the neural network can be approximated by a polynomial. 3. The input distribution is Gaussian.
It is unclear why any of these assumptions are true. For 1, the thresholds in neural networks are certainly not as high as required in the algorithm (for the threshold in the paper, after the first layer the neurons will be super sparse, often even just equal to 0; this is not really observed in real neural networks). For 2, there are no general results showing neural networks can be effectively approximated by low degree polynomials, and, if the future layers can be approximated, what prevents you from just assuming the entire neural network is a low degree polynomial? People have tried fitting polynomials and that does not perform nearly as well as neural networks.
The proof of the paper makes the problem even more clear because the paper shows that with this high threshold in the first layer, the future layers just behave linearly. This is again very far from true in any real neural networks.
Overall I'm OK with making some strong assumptions in order to prove some results for neural networks - after all it is a very difficult problem. However, this paper makes too many unrealistic assumptions. It's OK to make one of these assumptions, maybe 2, but 3 is too much for me. |
ICLR | Title
Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks
Abstract
Training deep neural networks (DNNs) is expensive and for this reason, third parties provide computational resources to train models. This makes DNNs vulnerable to backdoor attacks, in which the third party maliciously injects hidden functionalities in the model at training time. Removing a backdoor is challenging because although the defender has access to a clean, labeled dataset, they only have limited computational resources which are a fraction of the resources required to train a model from scratch. We propose Feature Grinding as an efficient, randomized backdoor sanitation technique against seven contemporary backdoors on CIFAR-10 and ImageNet. Feature Grinding requires at most six percent of the model’s training time on CIFAR-10 and at most two percent on ImageNet for sanitizing the surveyed backdoors. We compare Feature Grinding with four other sanitation methods and find that it is often the most effective at decreasing the backdoor’s success rate while preserving a high model accuracy. Our experiments include an ablation study over multiple parameters for each backdoor attack and sanitation technique to ensure a fair evaluation of all methods. Models suspected of containing a backdoor can be Feature Grinded using limited resources, which makes it a practical defense against backdoors that can be incorporated into any standard training procedure.
1 INTRODUCTION
Deep neural networks (DNNs) are large and complex. Many systems deployed in the real world make use of DNNs, such as surveillance systems (Singh et al., 2018; Wang et al., 2017), self-driving cars (Bojarski et al., 2016; Dosovitskiy et al., 2017), and biometric authentication systems (Boles & Rad, 2017; Liu et al., 2018b). Training DNNs is expensive and for this reason, computation is often outsourced to third parties or publicly available, pre-trained DNNs are re-used. This convenience comes at the cost of security, as these third parties may act maliciously.
Backdoor attacks are a critical security threat: the third party embeds hidden functionality into a trojan model that forces targeted misclassifications when a trigger is present in an input. The model functions normally for inputs without a trigger. In practice, backdoors can lead to crashes in self-driving cars (Versprille, 2015), surveillance systems with blind spots (Cooper, 2014), and biometric authentication systems granting access to unauthorized persons (Lovisotto et al., 2020).
Backdoor attacks and defenses are a well-studied subject in the field of secure machine learning. Backdoor attacks have the goal to remain effective by achieving a high attack success rate, being hard to detect and robust against model modification and sanitation. Existing backdoor attacks assume various capabilities of an attacker, such as (i) poisoning the training dataset (Gu et al., 2017; Liu et al., 2017; 2020; Pang et al., 2020a; Shokri et al., 2020), (ii) modifying the model’s training code (Turner et al., 2018; Saha et al., 2020) or (iii) controlling the trojaned model’s architecture (and parameters) (Hong et al., 2021; Yao et al., 2019; Tang et al., 2020).
Backdoor defenses aim to decrease the attack’s success rate as much as possible to make the exploitation of a backdoor unreliable in practice. Thereby, the defender has access to a set of non-poisoned, clean data with ground-truth labels and is given a model suspected of containing a backdoor. Defenses can be deployed at the model’s inference or training stage. Defenses deployed at inference time either pre-process inputs with the goal to render triggers unrecognizable (Cohen et al., 2019; Meng & Chen, 2017), or they run a detection algorithm for every input to predict whether it contains
a trigger (Chen et al., 2018; Udeshi et al., 2019). Defenses deployed during training either preemptively sanitize a model suspected of containing a backdoor (Liu et al., 2018a; Li et al., 2021), or they run a backdoor detection algorithm before sanitation (Wang et al., 2019; Guo et al., 2019).
Existing backdoor defenses are evaluated with a focus on their effectiveness at sanitizing a backdoor while maintaining the model’s utility. We find that evaluating the defense’s efficiency is often neglected. For example, the runtime of Neural Cleanse (Wang et al., 2019) scales proportionally with the number of classes, which can be feasible for 10 classes, but becomes infeasible for 1k classes. In practice, the motivation of a defender to engage with third parties and rely on their pre-trained models or computational resources is often rooted in a lack of adequate resources in the defender’s control. Maintaining high-performance hardware may be more expensive than booking resources on-demand from third parties (Saiyeda & Mir, 2017). Pre-trained models are readily available online at low or no cost1. Defenses have to be executed in a trusted environment on resources available to the defender. The decision of whether to use a defense is bounded by the defender’s available resources. We believe that a simple, minimal defense leveraging as few computational resources as possible while remaining effective is missing from related work.
We propose Feature Grinding as an efficient backdoor sanitation method. Feature Grinding requires low computational resources compared with four state-of-the-art backdoor sanitation approaches and achieves similar effectiveness. Our defense acts as a regularization method on the penultimate layer, also referred to as the feature layer, of the trojan DNN. The goal is to apply a transformation that increases the distance between predicted features of clean and trojan samples from the same target class. Feature Grinding requires only access to clean samples. Our experiments on the image classification datasets CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009) demonstrate that these transformations can be (i) learned quickly by the trojan model and (ii) that they sanitize backdoors effectively.
1.1 CONTRIBUTIONS
In summary, we claim the following contributions.
1. We propose an efficient backdoor sanitation method called Feature Grinding that can be performed with limited computational resources.
2. Feature Grinding sanitizes trojan models from all seven surveyed backdoor attacks.
3. We conduct an extensive evaluation on the image classification datasets CIFAR-10 and ImageNet, comparing Feature Grinding with four other backdoor sanitation methods.
4. We propose a metric to compare sanitation methods called Area Under the Pareto-Optimal Curve (AUPOC). AUPOC shows the trade-off between the clean data accuracy (CDA) and the attack’s success rate (ASR). We ablate over sets of parameters for each defense and compute the AUC for the best (i.e., Pareto-optimal) parameters given the CDA and ASR.
2 RELATED WORK
2.1 BACKDOORS ATTACK
A deep neural network (DNN) contains a backdoor if an attacker can add a secret backdoor trigger to inputs presented to a victim model, which causes targeted misclassifications. These triggers are typically hidden (e.g., small or imperceptible) and are only known to the attacker. Existing backdoor attacks assume different capabilities of an attacker, which can be summarized as follows.
• Poisoning: The attacker can inject poisoned samples into the training dataset. In clean label attacks, the attacker can only poison inputs, but cannot control their target labels.
• Training Code: The attacker can modify the training code (e.g., the model’s loss function).
• Model Architecture: The attacker has control over the victim model’s architecture.
1https://modelzoo.co
We study seven contemporary backdoor attacks from related work. In this paper, we focus on attacks that assume the attacker can (i) poison the training data or (ii) modify the training code. The seven surveyed backdoor attacks from related work can be summarized as follows.
Badnet (Gu et al., 2017) assumes that an attacker can poison the training data, but not modify the training code. The authors propose injecting samples using static trigger patterns such as a white square with poisoned labels. Clean-Label (Turner et al., 2018) is the first poisoning attack that does not require changing the poisoned input’s target labels. This makes it more difficult for the defender to remove poisoned inputs from their dataset before training the model. They stamp a nearly opaque trigger on inputs from the target class and adversarially perturb them to impede the victim model’s ability to learn from the image’s content and instead learn to associate the trigger with the target class. Refool (Liu et al., 2020) is a clean label attack that uses natural reflections as a trigger, blending other images with images from the target class, to embed their backdoor.
TrojanNN (Liu et al., 2017) requires control over the model’s training code for fine-tuning the victim model. They generate a modified trigger pattern optimized for exciting a subset of the victim model’s internal neurons and then fine-tune the model to embed a backdoor with these triggers. The Latent backdoor (Yao et al., 2019) has been designed to withstand the transfer learning process, in which a pre-trained, trojan teacher is fine-tuned for a related, but different task. They assume full control over the teacher’s training process and add a loss term that encourages embedding the backdoor in the lower layers of the teacher model. The Adversarial Backdoor Embedding (ABE) method (Shokri et al., 2020) modifies the model’s loss function during training to minimize the distance between poisoned and clean samples in the victim model’s feature layer. The Input-Model Co-Optimization (IMC) method (Pang et al., 2020a) optimizes both the victim model and the trigger used to implement the backdoor. The authors formulate an objective to craft a backdoor in conjunction with a trojan model and use alternating optimization of the model and inputs.
2.2 BACKDOOR DEFENSES
The goal of a backdoor defense is to prevent an attacker from exploiting their backdoor by either suppressing it through input preprocessing or by sanitizing the model. Following the categorization by Pang et al. (2020b), there are four categories of backdoor defenses.
• Input Reformation: Each input to the victim model is pre-processed with the goal to disable a trigger by rendering it unrecognizable to the victim model.
• Input Filtering: A binary classification is made for each input to the victim model whether it contains a trigger. Inputs that are predicted to contain a trigger can be discarded.
• Model Sanitation: The victim model is modified with the goal of losing the backdoor’s functionality while preserving the task’s functionality. Sanitation is done preemptively without prior detection of a backdoor.
• Model Inspection: An extension of model sanitation methods that first discover and reverse-engineer backdoors before sanitizing the model.
We survey four model sanitation or inspection methods from related work. All methods assume access to (i) the trojan model’s parameters and (ii) a subset of clean training data, but not to the backdoor’s trigger patterns. Fine-Pruning (Liu et al., 2018a) iteratively prunes dormant neurons that are suspected to implement a backdoor. Neural Attention Distillation (NAD) (Li et al., 2021) is a method to quickly distill a student using the teacher’s attention maps without retraining the student from scratch. Neural Cleanse (Wang et al., 2019) and TABOR (Guo et al., 2019) are model inspection methods that reverse-engineer a backdoor by an optimization process. The objective is to optimize inputs for a static trigger pattern that adversarially modifies the trojan model’s prediction of any input towards a target class. Since the target class is unknown, both methods iterate through each candidate target class and generate at least one trigger per class. TABOR is an extension of Neural Cleanse that specifically allows reverse-engineering large and complex triggers. The authors propose two methods to remove reverse-engineered triggers from a model. The first approach uses unlearning in which the model is fine-tuned with the trigger and the ground-truth label. The second approach uses pruning, in which those internal neurons of the trojan model are removed with the highest activation for the reverse-engineered trigger pattern.
3 THREAT MODEL
Training a DNN model is expensive and hence computation is often outsourced to third parties. As input, the defender specifies (i) the training dataset including ground-truth labels, (ii) the training code and (iii) the model architecture. During training, the attacker (i) injects poisoned data into the training dataset and (ii) modifies the training code, but they cannot modify the trojan model’s architecture. After training, the attacker sends the trojan model to the defender. The defender’s objective is to sanitize any backdoors present in the model using limited computational resources.
The defender’s primary goal is to sanitize the backdoor against a targeted attacker. A targeted attacker wins if the trojan model has a high success rate of predicting a trojan input with the target label. Given a trigger δ, a clean dataset D, a target label t and a model M, the attacker wins if the following condition holds for some ε > 0:
Pr_{x∈D}[M(x + δ) = t] > 1 − ε.
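For concreteness, the targeted success rate can be estimated empirically as in the sketch below. This is our own illustration: the function name, the additive application of the trigger, and the data loader are assumptions, since how a trigger is applied differs per attack.

```python
import torch

@torch.no_grad()
def attack_success_rate(model, loader, trigger, target):
    """Fraction of clean inputs classified as `target` once the trigger is added."""
    model.eval()
    hits, total = 0, 0
    for x, _ in loader:
        preds = model(x + trigger).argmax(dim=1)  # additive trigger, for simplicity
        hits += (preds == target).sum().item()
        total += x.size(0)
    return hits / total  # the attacker "wins" if this exceeds 1 - epsilon
```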
The defender’s secondary goal is to defend against a non-targeted attacker. A non-targeted attacker wins when the model has a high probability of misclassifying trojan inputs as any target class other than the ground-truth. Defending against non-targeted attacks is more challenging for the defender.
We believe that a defender with limited computational resources is a realistic and practically relevant assumption for multiple reasons. Prices for third party computational resources are decreasing and in many scenarios, it becomes more affordable to occasionally book external resources rather than continually maintaining dedicated hardware. Professional third party hardware often features hightiered GPUs that allow for rapid development due to low training times. A defender may have the capabilities to run a few training steps themselves on their hardware. However, the aforementioned higher monetary costs and runtimes may deter the defender from performing backdoor sanitation and make them more inclined to accept the risk of a backdoor. The most effective backdoor sanitation approaches are useless if their runtime exceeds the defender’s limits. Efficient backdoor sanitation strategies try to minimize the runtime, to allow acting on even a weak suspicion of a backdoor present in their model. We do not put hard constraints on the defender’s computational power, but instead, compare sanitation methods by their execution time relative to the model’s training time.
In summary, our attacker has access to the entire training dataset and its ground-truth labels, the model’s training code, and significant computational resources. However, they cannot modify the model’s architecture without the defender taking notice. Our defender has access to the entire training dataset and the ground-truth labels, the trojan model, but only limited computational resources.
4 FEATURE GRINDING
4.1 MOTIVATION
A DNN classifier is composed of a sequence of functions σ(f_m(f_{m−1}(…f₁(x)))), where f_i represents the i-th hidden layer, σ(·) is the softmax activation, and x ∈ R^n represents the input. Feature Grinding is applied to the penultimate layer f_{m−1}(·) of the DNN model, which we refer to as the model’s feature extractor. The feature extractor of a model M is commonly referred to as φ_M(·). This layer is a compressed representation of the input, and the features form a high-dimensional feature space. It has been observed that, for well-trained models, distances between features are semantically meaningful with respect to the task (Zhang et al., 2018).
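In PyTorch, the feature extractor φ_M of, for instance, the ResNet-50 used in the ImageNet experiments can be obtained by dropping the final fully-connected head; a minimal sketch:

```python
import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True)
# phi_M: everything up to (and including) global average pooling, without the fc head.
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])

x = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    features = feature_extractor(x).flatten(1)  # 2048-dimensional feature vector
print(features.shape)  # torch.Size([1, 2048])
```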
Inputs belonging to the same class typically form dense clusters in the feature space. A backdoor’s objective is to map a trojan input to a feature vector located within the cluster composing the target class. The goal of a backdoor sanitation method is to disentangle the trojan samples from the cluster of the target class by increasing the separation between the clusters of clean and trojan samples. However, the defender does not know the trigger pattern. There are two conceptual approaches to achieve this disentanglement. Either, the trojan examples are reverse-engineered and then sanitized by projecting them to a different area in the feature space, or all clean examples are moved to a different region in the feature space.
Difficulties with the first approach (reverse-engineering) are that (i) formalizing complex triggers spanning multiple regions of the image may be difficult and (ii) it is computationally expensive to
optimize for an unknown trigger and target label. Once the trigger has been reverse-engineered, an advantage is that the trojan model can be updated with minimal side-effects through unlearning or pruning with the expectation that the model retains a high test accuracy. This idea motivates model inspection methods such as Neural Cleanse (Wang et al., 2019) and TABOR (Guo et al., 2019).
A different methodology, used in Feature Grinding, is to modify the trojaned model using only clean samples without attempting to reverse-engineer trigger patterns. This method is expected to have greater side-effects to the model, but it requires significantly fewer computational resources. For example, fine-pruning (Liu et al., 2018a) updates the trojaned model purely on its actions on clean data. It prunes neurons that are least active when clean data is passed through the model. The goal of Feature Grinding is to relocate all clean samples to a different region in the feature space, as illustrated in Figure 1. The hypothesis is that by moving clean samples, the trojaned samples retain their feature representation which disentangles them from the clean target samples.
4.2 GRINDING AND RESTORATION
The goal of Feature Grinding is to modify the model’s feature extractor so that the updated, grinded model’s feature extractor predicts transformed features. First, the defender resets the parameters of the model’s head, which refers to the weights and biases of all layers that are on top of the model’s feature extraction layer. Then, the defender records all feature activations of the training dataset and perturbs them by applying a static, randomized transformation function before fine-tuning the victim model on these transformed features.
Feature Grinding passes through two phases: grinding and restoration. In the grinding phase, the victim model is fine-tuned to predict the transformed features given clean samples as inputs. The model may lose some of its test accuracy during the grinding phase. As compensation, we use a restoration phase that focuses on regaining the model’s test accuracy. Both phases can be incorporated into any standard training procedure by altering the model’s loss function.
Assume the defender receives a trojan model M and wants to derive a model M ′ that has been sanitized of any backdoor present in M . When using Feature Grinding, the defender adds a term Lf to the model’s loss during the grinding and restoration phases. Lf can be described as follows, given some transformation T (·), the feature extractor φM (·) and some input x for model M .
L_f(x) = ‖φ_{M′}(x) − T(φ_M(x))‖
The total loss L for both phases is a sum of the task loss Lt and the Feature Grinding loss Lf . We use a parameter α ∈ R to trade-off both loss terms. In our experiments, we use α = 0.8 during the grinding phase and α = 0.2 during the restoration phase.
L(x) = α L_f(x) + (1 − α) L_t(x)
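To make the two-phase procedure concrete, a minimal PyTorch sketch of one training step is given below. The names model.features, model.head, transform, and the recorded ref_features are our own assumptions rather than a reference implementation; the head is assumed to have been re-initialized before the grinding phase, and ref_features are the features φ_M(x) recorded from the original trojan model.

```python
import torch
import torch.nn.functional as F

def feature_grinding_step(model, x, y, ref_features, transform, optimizer, alpha):
    """One step; alpha = 0.8 during grinding, alpha = 0.2 during restoration."""
    feats = model.features(x)                  # phi_{M'}(x) of the model being sanitized
    logits = model.head(feats)                 # head was reset before grinding
    loss_f = torch.norm(feats - transform(ref_features), dim=1).mean()
    loss_t = F.cross_entropy(logits, y)        # standard task loss L_t
    loss = alpha * loss_f + (1.0 - alpha) * loss_t
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```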
4.3 TRANSFORMATION
The transformation function is used to perturb the feature space of the victim model. Choosing a transformation function influences the efficiency of Feature Grinding. An effective transformation ensures that the success rate of the backdoor is decreased as much as possible (e.g., by retraining the entire model). An efficient transformation adds an additional constraint by trying to minimize the resources spent, i.e., the number of steps needed for the victim model to learn the transformed feature space, while remaining effective at sanitizing the backdoor.
Proposed Transformations. We design transformation functions that follow our intuition on achieving high efficiency. We experiment with the following transformation functions.
1. Permute (T_p): The perturbation consists of a random permutation of all features. The permutation is sampled randomly once and then applied to all features.
2. Rotate (T_r): The features are rotated in the n-dimensional space by sampling random rotation matrices using the approach of Stewart (1980).
3. Rotate-2d (T_r2): The features are rotated in a randomly sampled, two-dimensional plane by an angle θ.
All surveyed transformations are (i) sampled randomly once and (ii) they are automorphisms. Randomization is important because the applied transformation must be kept secret from the adversary to avoid them from adapting their backdoor to the transformation. The defender should post-hoc dispose of their records about the applied transformation. We believe that transformations which are automorphisms are advantageous because they preserve the structure of the high-dimensional space which shortens the restoration phase.
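The three transformations can be sampled once up front, for example as in the numpy sketch below (ours). For Rotate we substitute a QR decomposition of a Gaussian matrix for Stewart's algorithm; with the sign correction this also yields a uniformly random orthogonal matrix. The feature dimension n and the angle θ are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 512                                       # feature dimension (placeholder)

# Permute (T_p): a fixed random permutation of the feature coordinates.
perm = rng.permutation(n)
T_p = lambda f: f[..., perm]

# Rotate (T_r): random orthogonal matrix via QR of a Gaussian matrix
# (stand-in for Stewart (1980); the sign correction gives the Haar distribution).
Q, R = np.linalg.qr(rng.standard_normal((n, n)))
Q *= np.sign(np.diag(R))
T_r = lambda f: f @ Q.T

# Rotate-2d (T_r2): rotate by angle theta inside one random 2-dimensional plane.
theta = np.pi / 2                             # placeholder angle
U, _ = np.linalg.qr(rng.standard_normal((n, 2)))  # orthonormal basis of the plane
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
def T_r2(f):
    coords = f @ U                            # in-plane components of the features
    return f + (coords @ rot.T - coords) @ U.T  # rotate only within the plane
```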
5 EVALUATION
In this section, we present our experimental setup and describe our evaluation criteria including the proposed AUPOC metric. Then we show the performance of Feature Grinding against seven contemporary backdoor attacks. We compare Feature Grinding with four other contemporary backdoor sanitation methods: Fine-Pruning (FP) (Liu et al., 2018a), Neural Cleanse (NC) (Wang et al., 2019), TABOR (Guo et al., 2019) and Neural Attention Distillation (NAD) (Li et al., 2021).
5.1 SETUP
In this section, we describe the setup for our experiments.
Hardware and Software. We perform all experiments on a local machine with a single A100 GPU with 40 GByte of VRAM and an AMD EPYC 7302 16-Core Processor. We conduct our experiments using PyTorch 1.9.0 and the trojanvision (Pang et al., 2020b) package version 1.0.10 that implements many of the surveyed backdoor attacks and defenses.
Datasets. We experiment with the following two standard image classification datasets.
• CIFAR-10 (Krizhevsky et al., 2009): Contains 50k training and 10k testing images with a dimension of 32x32 pixels and 10 class labels.
• ImageNet (Deng et al., 2009): Contains 1.23m training and 50k testing images with 1k class labels and we resize and center-crop the images to 256x256 pixels.
Network Architectures. We use standard training procedures for training a ResNet-18 (He et al., 2016) on CIFAR-10. The model is trained for 120 epochs with a learning rate initialized at 0.1. We decrease the learning rate by a factor of 10 when the model’s loss does not improve for two epochs and we use random cropping and cutout as data augmentation strategies. Our clean model achieves 96.06% test accuracy which is similar to the value reported in the original ResNet paper. For ImageNet, we rely on a pre-trained ResNet-50 that is made publicly available through the torchvision package2. The model has a test accuracy of 76.15%.
2https://pytorch.org/vision/stable/models.html
Training Time. We measure the total training time of a clean model on CIFAR-10 and ImageNet as a point of reference for interpreting the efficiency of a backdoor defense. For CIFAR-10, we observe that a model can be trained in 34 minutes on our hardware. For ImageNet, we measure the runtime for a single epoch and estimate the model’s total training time to be 70 hours for 120 training epochs. Table 1 shows the runtime for each defense. We observe that Feature Grinding is more than 11× faster than the runtime-optimized version of Neural Cleanse and 16× faster than TABOR.
Evaluation Metrics. In summary, we empirically measure the following four metrics.
• CDA (Clean Data Accuracy): The accuracy of the model on an unseen test dataset.
• ASR (Attack Success Rate): The rate at which the trojan model predicts the target label for malicious inputs that contain the backdoor trigger.
• ARR (Attack Recovery Rate): The rate at which a trojan model predicts the ground-truth label for malicious inputs that contain the backdoor trigger.
• Runtime: The runtime of a defense on our hardware.
AUPOC. We want to compare the effectiveness of two defenses by their CDA and ASR. There is an apparent correlation between both metrics, where a low CDA predicts a low ASR. For example, assume the defender randomly assigns all weights of the victim model (CDA is equivalent to random guessing), then the ASR is expected to be no higher than random guessing as well. For a fair comparison between defenses, we derive a combined value from the CDA and ASR that allows a pairwise comparison of backdoor defenses. We propose using the Area Under the Pareto-Optimal Curve (AUPOC) that we record by ablating over multiple sets of parameters for each defense.
For a given defense and attack, the AUPOC can be derived as follows. We identify a set of parameters for the defense to include in an ablation study. Then, for each set of parameters, we record the ASR and CDA as a single data point. We draw the Pareto frontier between points that are Pareto-optimal, i.e., those points for which no other points exist that have a lower ASR and a higher CDA. To achieve closure of the curve, we add a point at an ASR of 1.0 with the highest CDA of the Pareto frontier. Let f(x) be the piece-wise function connecting all the points in the Pareto frontier; then the AUPOC is simply the integral over that curve.
AUPOC = ∫₀¹ f(x) dx
Note that a ’perfect’ AUPOC of 1.0 means that the defense achieves a CDA of 1.0 at an ASR of 0.0. This is unattainable if the clean model’s CDA is lower than 1.0, otherwise, the defense would improve the clean model. AUPOC is a relative measure to compare defenses under the same conditions. It is not useful as an absolute measure due to its sensitivity to the clean model’s CDA.
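A direct implementation of the metric might look as follows (our sketch; the piece-wise constant interpolation between frontier points is one reasonable reading of f(x)):

```python
import numpy as np

def aupoc(asr_values, cda_values):
    """Area under the Pareto-optimal (ASR, CDA) curve of one ablation study."""
    points = sorted(zip(asr_values, cda_values))   # sort by ASR, ascending
    frontier, best_cda = [], 0.0
    for asr, cda in points:                        # keep Pareto-optimal points: no other
        if cda > best_cda:                         # point has lower ASR *and* higher CDA
            frontier.append((asr, cda))
            best_cda = cda
    frontier.append((1.0, best_cda))               # close the curve at ASR = 1
    area, prev_asr, prev_cda = 0.0, 0.0, 0.0
    for asr, cda in frontier:                      # piece-wise constant integration
        area += (asr - prev_asr) * prev_cda
        prev_asr, prev_cda = asr, cda
    return area

print(aupoc([0.05, 0.20, 0.90], [0.90, 0.94, 0.95]))  # 0.888
```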
Parameter Ablation. Computing the AUPOC for each defense against each backdoor attack requires ablating over multiple sets of parameters for each defense. We rely on the parameters proposed in the authors' papers for all surveyed backdoor defenses. Since we are interested in comparing efficiency, we keep the runtime constant and ablate over parameters such as the learning rate, rather than the number of epochs. For Fine-Pruning, we ablate over the pruning rate ρ ∈ [0.05, 0.99]. We optimize triggers in Neural Cleanse and TABOR for ten epochs and ablate over the learning rate α ∈ [0.00001, 0.00002] used for unlearning a trigger pattern. For NAD, we train the student and teacher model for five epochs each and ablate over the learning rate α ∈ [0.0001, 0.001]. We use the same cutout data augmentation strategy as the authors. We run Feature Grinding for five epochs and ablate over the number of epochs spent in the grinding stage e ∈ {2, 4} and the three transformations proposed in Section 4.3. All results for CIFAR-10 are computed as the mean value
over three repetitions and a single repetition for ImageNet (due to higher computational demands). Since defenses on ImageNet do not require the entire training data, we give each defense access to a random subset of 100k clean training samples rather than the whole dataset of 1.23m records.
5.2 EFFECTIVENESS
Our goal is to measure the effectiveness of each backdoor attack against each defense. Figures 2 and 3 show the Pareto-optimal Curves and AUPOC metrics for all five defenses. The plot shows the ASR on the x-axis in relation to the CDA and we show only data points that are members of the Pareto frontier. Defenses with a higher AUPOC are more effective.
CIFAR-10: Figure 2 shows the results for CIFAR-10. We observe that all defenses are effective at sanitizing a majority of the backdoors. Fine-Pruning has a relatively low AUPOC of less than 0.9 against the ABE, Latent Backdoor and Badnet attacks. For example, the best (i.e., lowest) ASR against the ABE backdoor is about 20 percent, which is relatively high compared to other sanitation methods such as Feature Grinding, which achieves a best ASR of 0 percent. The remaining defenses sanitize all backdoors effectively with an AUPOC of at least 0.937. Feature Grinding has the highest AUPOCs out of all defenses against every backdoor attack. NAD effectively sanitizes the backdoors, but the models' CDAs are about 1 percent lower compared to Feature Grinding.
ImageNet: Figure 3 shows the results for ImageNet. We observe that it is more difficult to remove backdoors from ImageNet models than from CIFAR-10 models. Fine-Pruning achieves a low AUPOC of less than 0.5 against the Latent Backdoor and the Refool attack. Similarly, NAD has low effectiveness against the Badnet and IMC attacks. For example, the best ASR that NAD achieves against IMC is still about 57 percent. Neural Cleanse and TABOR are completely ineffective against the Refool attack with AUPOCs of 0.021 and 0.030, which is expected because Refool applies the trigger pattern across the entire image. Feature Grinding achieves relatively high AUPOCs against each backdoor attack. We measure the lowest AUPOC of 0.585 against the IMC attack, meaning
that the best (i.e., lowest) ASR is about 22 percent. Overall, we observe that Feature Grinding has the best worst-case performance against all backdoor attacks compared to all other defenses.
5.3 ATTACK RECOVERY
In this experiment, we compare the attack recovery rates (ARR) for each defense. The results for CIFAR-10 and ImageNet are shown in Figures 2f and 3f. The bar charts show the defense on the x-axis and the ARR on the y-axis, and the dashed horizontal line represents the clean model's CDA. A high ARR indicates that the sanitized model predicts the correct, ground-truth class for a backdoored input and shows the defense's effectiveness against a non-targeted attacker.
We observe that for both datasets, Feature Grinding has a high ARR against every attack. This supports the hypothesis stated in Section 4.1 that Feature Grinding successfully disentangles clean and backdoored samples. For CIFAR-10, Neural Cleanse and TABOR have the highest ARRs, followed by Feature Grinding and NAD. Fine-Pruning has a significantly lower ARR against multiple backdoors. On ImageNet, Feature Grinding has similar ARRs as Neural Cleanse and TABOR, except for Refool against which Feature Grinding has a significantly higher ARR than all other defenses.
6 CONCLUSION
We proposed Feature Grinding as an efficient backdoor sanitation method that can be performed using a low amount of computational resources. Our experiments on the image classification datasets CIFAR-10 and ImageNet have shown that Feature Grinding is highly efficient and can sanitize all seven surveyed backdoors. On ImageNet, Feature Grinding is approximately 11× faster than Neural Cleanse and about 16× faster than TABOR with a similar (or better) effectiveness. We propose the AUPOC metric to fairly evaluate the effectiveness of backdoor sanitation methods. Our evaluation shows that other fast sanitation methods from related work, such as Fine Pruning and Neural Attention Distillation, do not achieve the same AUPOC as Feature Grinding against all backdoor attacks. We hope that our work leads to further research into efficient backdoor sanitation methods. | 1. What is the focus and contribution of the paper on efficient backdoor sanitation?
2. What are the strengths of the proposed approach, particularly in terms of efficiency and effectiveness?
3. What are the weaknesses of the paper, especially regarding methodology and explanations?
4. Do you have any concerns or questions about the proposed Feature Grinding method?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes an efficient backdoor sanitation method that only takes 6% and 2% of training time on CIFAR10 and ImageNet, respectively. The sanitation method, called Feature Grinding aims to increase the distance between predicted features of clean and trojan samples from the same target class by applying a transformation with a random factor (random permutation, random rotation matrices, randomly sampled) which should keep in secret from the adversary.
Review
Strengths:
The idea is quite interesting and meaningful. Authors tried to sanitize models with a restricted resource as models become low-cost and computing resource becomes a critical problem.
The runtime decreased significantly yet the performance still remains roughly the same.
Authors proposed a new metric to measure the effectiveness of sanitation.
Weaknesses:
The methodology is a bit confusing.
In Sec 4.2, the authors only reset the model's head. I am wondering how to define the top of the feature extraction layer. Do different models have to reset a different number of layers? How many parameters have to reset? Some further explanations would be needed to explain it.
As the authors described in 4.2, the activation layer should be recorded for every data. I am curious about how much storage needs and whether it is affordable when the resource is restricted (which is the scenario author described), especially when sanitizing an ImageNet model.
An algorithm would be needed to better explain the methodology. Some details are missing (e.g. (1) why should we record the activation layer? I think it is for φM(x) but it is hard to understand without any further explanation. (2) The authors only mention that there are two stages; however, there's no explanation when to switch from one to another.
Three types of transformation are proposed in section 4; however, there's no experiment to explain which one is better and why.
I'm wondering the differences between transforming data and transforming features. Why transforming features works. In my opinion, the same method can be applied to data and get the results.
Some questions about the "Training Time" part of the evaluation:
Does the time of Feature Grinding include recording the activation layer?
As I knew, Feature Grinding is quite similar to NAD, both feed data to model and get activation layer(s), compute the losses, and fine-tune the model. What if we also record the feature map of NAD, will the runtime become the same as the time of Feature Grinding?
Furthermore, Feature Grinding reset some parameters and re-train these layers. How many epochs are needed to fine-tune the model? Is it faster than NAD? (In NAD, they claimed only few epochs are needed (less than 5 iterations), see Figure 9 in their appendix). As some parameters have to be reset in Feature Grinding, will it converge as fast as NAD?
In the AUPOC part, "There is an apparent correlation between both metrics, where a low CDA predicts a low ASR. For example, assume the defender randomly assigns all weights of the victim model (CDA is equivalent to random guessing), then the ASR is expected to be no higher than random guessing as well." The idea and the example are not convincible. Some further evidence is needed to support this assumption. We can always train a backdoored model with a strong attack and perform poorly, which conflicts with the above idea. |
ICLR | Title
Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks
Abstract
Training deep neural networks (DNNs) is expensive and for this reason, third parties provide computational resources to train models. This makes DNNs vulnerable to backdoor attacks, in which the third party maliciously injects hidden functionalities in the model at training time. Removing a backdoor is challenging because although the defender has access to a clean, labeled dataset, they only have limited computational resources which are a fraction of the resources required to train a model from scratch. We propose Feature Grinding as an efficient, randomized backdoor sanitation technique against seven contemporary backdoors on CIFAR-10 and ImageNet. Feature Grinding requires at most six percent of the model’s training time on CIFAR-10 and at most two percent on ImageNet for sanitizing the surveyed backdoors. We compare Feature Grinding with five other sanitation methods and find that it is often the most effective at decreasing the backdoor’s success rate while preserving a high model accuracy. Our experiments include an ablation study over multiple parameters for each backdoor attack and sanitation technique to ensure a fair evaluation of all methods. Models suspected of containing a backdoor can be Feature Grinded using limited resources, which makes it a practical defense against backdoors that can be incorporated into any standard training procedure.
1 INTRODUCTION
Deep neural networks (DNNs) are large and complex. Many systems deployed in the real world make use of DNNs, such as surveillance systems (Singh et al., 2018; Wang et al., 2017), self-driving cars (Bojarski et al., 2016; Dosovitskiy et al., 2017), and biometric authentication systems (Boles & Rad, 2017; Liu et al., 2018b). Training DNNs is expensive and for this reason, computation is often outsourced to third parties or publicly available, pre-trained DNNs are re-used. This convenience comes at the cost of security, as these third parties may act maliciously.
Backdoor attacks are a critical security threat. In such an attack, the third party embeds hidden functionality into a trojan model that forces targeted misclassifications when a trigger is present in an input. The model functions normally for inputs without a trigger. In practice, backdoors can lead to crashes in self-driving cars (Versprille, 2015), surveillance systems with blind spots (Cooper, 2014), and biometric authentication systems granting access to unauthorized persons (Lovisotto et al., 2020).
Backdoor attacks and defenses are a well-studied subject in the field of secure machine learning. Backdoor attacks aim to achieve a high attack success rate while remaining hard to detect and robust against model modification and sanitation. Existing backdoor attacks assume various capabilities of an attacker, such as (i) poisoning the training dataset (Gu et al., 2017; Liu et al., 2017; 2020; Pang et al., 2020a; Shokri et al., 2020), (ii) modifying the model’s training code (Turner et al., 2018; Saha et al., 2020) or (iii) controlling the trojaned model’s architecture (and parameters) (Hong et al., 2021; Yao et al., 2019; Tang et al., 2020).
Backdoor defenses aim to decrease the attack’s success rate as much as possible to make the exploitation of a backdoor unreliable in practice. Thereby, the defender has access to a set of non-poisoned, clean data with ground-truth labels and is given a model suspected of containing a backdoor. Defenses can be deployed at the model’s inference or training stage. Defenses deployed at inference time either pre-process inputs with the goal to render triggers unrecognizable (Cohen et al., 2019; Meng & Chen, 2017), or they run a detection algorithm for every input to predict whether it contains
a trigger (Chen et al., 2018; Udeshi et al., 2019). Defenses deployed during training either preemptively sanitize a model suspected of containing a backdoor (Liu et al., 2018a; Li et al., 2021), or they run a backdoor detection algorithm before sanitation (Wang et al., 2019; Guo et al., 2019).
Existing backdoor defenses are evaluated with a focus on their effectiveness at sanitizing a backdoor while maintaining the model’s utility. We find that evaluating the defense’s efficiency is often neglected. For example, the runtime of Neural Cleanse (Wang et al., 2019) scales proportionally with the number of classes, which can be feasible for 10 classes, but becomes infeasible for 1k classes. In practice, the motivation of a defender to engage with third parties and rely on their pre-trained models or computational resources is often rooted in a lack of adequate resources in the defender’s control. Maintaining high-performance hardware may be more expensive than booking resources on-demand from third parties (Saiyeda & Mir, 2017). Pre-trained models are readily available online at low or no cost1. Defenses have to be executed in a trusted environment on resources available to the defender. The decision of whether to use a defense is bounded by the defender’s available resources. We believe that a simple, minimal defense leveraging as few computational resources as possible while remaining effective is missing from related work.
We propose Feature Grinding as an efficient backdoor sanitation method. Feature Grinding requires low computational resources compared with four state-of-the-art backdoor sanitation approaches and achieves similar effectiveness. Our defense acts as a regularization method on the penultimate layer, also referred to as the feature layer, of the trojan DNN. The goal is to apply a transformation that increases the distance between predicted features of clean and trojan samples from the same target class. Feature Grinding requires only access to clean samples. Our experiments on the image classification datasets CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009) demonstrate that these transformations can be (i) learned quickly by the trojan model and (ii) that they sanitize backdoors effectively.
1.1 CONTRIBUTIONS
In summary, we claim the following contributions.
1. We propose an efficient backdoor sanitation method called Feature Grinding that can be performed with limited computational resources.
2. Feature Grinding sanitizes trojan models from all seven surveyed backdoor attacks.
3. We conduct an extensive evaluation on the image classification datasets CIFAR-10 and ImageNet, comparing Feature Grinding with four other backdoor sanitation methods.
4. We propose a metric to compare sanitation methods called Area Under the Pareto-Optimal Curve (AUPOC). AUPOC shows the trade-off between the clean data accuracy (CDA) and the attack’s success rate (ASR). We ablate over sets of parameters for each defense and compute the AUC for the best (i.e., Pareto-optimal) parameters given the CDA and ASR.
2 RELATED WORK
2.1 BACKDOOR ATTACKS
A deep neural network (DNN) contains a backdoor if an attacker can add a secret backdoor trigger to inputs presented to a victim model, which causes targeted misclassifications. These triggers are typically hidden (e.g., small or imperceptible) and are only known to the attacker. Existing backdoor attacks assume different capabilities of an attacker, which can be summarized as follows.
• Poisoning: The attacker can inject poisoned samples into the training dataset. In clean label attacks, the attacker can only poison inputs, but cannot control their target labels.
• Training Code: The attacker can modify the training code (e.g., the model’s loss function).
• Model Architecture: The attacker has control over the victim model’s architecture.
1 https://modelzoo.co
We study seven contemporary backdoor attacks from related work. In this paper, we focus on attacks that assume the attacker can (i) poison the training data or (ii) modify the training code. The seven surveyed backdoor attacks from related work can be summarized as follows.
Badnet (Gu et al., 2017) assumes that an attacker can poison the training data, but not modify the training code. The authors propose injecting samples using static trigger patterns such as a white square with poisoned labels. Clean-Label (Turner et al., 2018) is the first poisoning attack that does not require changing the poisoned input’s target labels. This makes it more difficult for the defender to remove poisoned inputs from their dataset before training the model. They stamp a nearly opaque trigger on inputs from the target class and adversarially perturb them to impede the victim model’s ability to learn from the image’s content and instead learn to associate the trigger with the target class. Refool (Liu et al., 2020) is a clean label attack that uses natural reflections as a trigger, blending other images with images from the target class, to embed their backdoor.
TrojanNN (Liu et al., 2017) requires control over the model’s training code for fine-tuning the victim model. They generate a modified trigger pattern optimized for exciting a subset of the victim model’s internal neurons and then fine-tune the model to embed a backdoor with these triggers. The Latent backdoor (Yao et al., 2019) has been designed to withstand the transfer learning process, in which a pre-trained, trojan teacher is fine-tuned for a related, but different task. They assume full control over the teacher’s training process and add a loss term that encourages embedding the backdoor in the lower layers of the teacher model. The Adversarial Backdoor Embedding (ABE) method (Shokri et al., 2020) modifies the model’s loss function during training to minimize the distance between poisoned and clean samples in the victim model’s feature layer. The Input-Model Co-Optimization (IMC) method (Pang et al., 2020a) optimizes both the victim model and the trigger used to implement the backdoor. The authors formulate an objective to craft a backdoor in conjunction with a trojan model and use alternating optimization of the model and inputs.
2.2 BACKDOOR DEFENSES
The goal of a backdoor defense is to prevent an attacker from exploiting their backdoor by either suppressing it through input preprocessing or by sanitizing the model. Following the categorization by Pang et al. (2020b), there are four categories of backdoor defenses.
• Input Reformation: Each input to the victim model is pre-processed with the goal to disable a trigger by rendering it unrecognizable to the victim model.
• Input Filtering: A binary classification is made for each input to the victim model whether it contains a trigger. Inputs that are predicted to contain a trigger can be discarded.
• Model Sanitation: The victim model is modified with the goal of losing the backdoor’s functionality while preserving the task’s functionality. Sanitation is done preemptively without prior detection of a backdoor.
• Model Inspection: An extension of model sanitation methods that first discover and reverse-engineer backdoors before sanitizing the model.
We survey four model sanitation or inspection methods from related work. All methods assume access to (i) the trojan model’s parameters and (ii) a subset of clean training data, but not to the backdoor’s trigger patterns. Fine-Pruning (Liu et al., 2018a) iteratively prunes dormant neurons that are suspected to implement a backdoor. Neural Attention Distillation (NAD) (Li et al., 2021) is a method to quickly distill a student using the teacher’s attention maps without retraining the student from scratch. Neural Cleanse (Wang et al., 2019) and TABOR (Guo et al., 2019) are model inspection methods that reverse-engineer a backdoor by an optimization process. The objective is to optimize inputs for a static trigger pattern that adversarially modifies the trojan model’s prediction of any input towards a target class. Since the target class is unknown, both methods iterate through each candidate target class and generate at least one trigger per class. TABOR is an extension of Neural Cleanse that specifically allows reverse-engineering large and complex triggers. The authors propose two methods to remove reverse-engineered triggers from a model. The first approach uses unlearning in which the model is fine-tuned with the trigger and the ground-truth label. The second approach uses pruning, in which those internal neurons of the trojan model are removed with the highest activation for the reverse-engineered trigger pattern.
3 THREAT MODEL
Training a DNN model is expensive and hence computation is often outsourced to third parties. As input, the defender specifies (i) the training dataset including ground-truth labels, (ii) the training code and (iii) the model architecture. During training, the attacker (i) injects poisoned data into the training dataset and (ii) modifies the training code, but they cannot modify the trojan model’s architecture. After training, the attacker sends the trojan model to the defender. The defender’s objective is to sanitize any backdoors present in the model using limited computational resources.
The defender’s primary goal is to sanitize the backdoor against a targeted attacker. A targeted attacker wins if the trojan model has a high success rate of predicting a trojan input with the target label. Given a trigger δ, a clean dataset D, a target label t and a model M, the attacker wins if the following condition holds for some ε > 0:

Pr_{x∈D}[M(x + δ) = t] > 1 − ε
The defender’s secondary goal is to defend against a non-targeted attacker. A non-targeted attacker wins when the model has a high probability of misclassifying trojan inputs as any target class other than the ground-truth. Defending against non-targeted attacks is more challenging for the defender.
We believe that a defender with limited computational resources is a realistic and practically relevant assumption for multiple reasons. Prices for third party computational resources are decreasing and in many scenarios, it becomes more affordable to occasionally book external resources rather than continually maintaining dedicated hardware. Professional third party hardware often features high-tiered GPUs that allow for rapid development due to low training times. A defender may have the capabilities to run a few training steps themselves on their hardware. However, the aforementioned higher monetary costs and runtimes may deter the defender from performing backdoor sanitation and make them more inclined to accept the risk of a backdoor. The most effective backdoor sanitation approaches are useless if their runtime exceeds the defender’s limits. Efficient backdoor sanitation strategies try to minimize the runtime, to allow acting on even a weak suspicion of a backdoor present in their model. We do not put hard constraints on the defender’s computational power, but instead, compare sanitation methods by their execution time relative to the model’s training time.
In summary, our attacker has access to the entire training dataset and its ground-truth labels, the model’s training code, and significant computational resources. However, they cannot modify the model’s architecture without the defender taking notice. Our defender has access to the entire training dataset and the ground-truth labels, the trojan model, but only limited computational resources.
4 FEATURE GRINDING
4.1 MOTIVATION
A DNN classifier is composed of a sequence of functions σ(f_m(f_{m−1}(… f_1(x)))), where f_i represents the i-th hidden layer, σ(·) is the softmax activation and x ∈ R^n represents the input. Feature Grinding is applied to the penultimate layer f_{m−1}(·) of the DNN model, which we refer to as the model’s feature extractor. The feature extractor of a model M is commonly referred to as φ_M(·). This layer is a compressed representation of the input and the features form a high-dimensional feature space. It has been observed that distances between features are semantically meaningful for well-trained models with respect to the task (Zhang et al., 2018).
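As a concrete illustration, the sketch below shows one common way to expose the penultimate layer of a torchvision ResNet as φ_M(·); this is our own minimal sketch under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# A minimal sketch (our assumption, not the paper's code): split a ResNet-50
# into a feature extractor phi_M and the "head" sitting on top of it.
model = resnet50(pretrained=True)
feature_extractor = nn.Sequential(*list(model.children())[:-1])  # up to avgpool
head = model.fc                                # the final classification layer

x = torch.randn(1, 3, 256, 256)                # dummy input
features = feature_extractor(x).flatten(1)     # phi_M(x), shape (1, 2048)
logits = head(features)                        # pre-softmax class scores
```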
Inputs belonging to the same class typically form dense clusters in the feature space. A backdoor’s objective is to map a trojan input to a feature vector located within the cluster composing the target class. The goal of a backdoor sanitation method is to disentangle the trojan samples from the cluster of the target class by increasing the separation between the clusters of clean and trojan samples. However, the defender does not know the trigger pattern. There are two conceptual approaches to achieve this disentanglement. Either, the trojan examples are reverse-engineered and then sanitized by projecting them to a different area in the feature space, or all clean examples are moved to a different region in the feature space.
Difficulties with the first approach (reverse-engineering) are that (i) formalizing complex triggers spanning multiple regions of the image may be difficult and (ii) it is computationally expensive to
optimize for an unknown trigger and target label. Once the trigger has been reverse-engineered, an advantage is that the trojan model can be updated with minimal side-effects through unlearning or pruning with the expectation that the model retains a high test accuracy. This idea motivates model inspection methods such as Neural Cleanse (Wang et al., 2019) and TABOR (Guo et al., 2019).
A different methodology, used in Feature Grinding, is to modify the trojaned model using only clean samples without attempting to reverse-engineer trigger patterns. This method is expected to have greater side-effects on the model, but it requires significantly fewer computational resources. For example, Fine-Pruning (Liu et al., 2018a) updates the trojaned model purely based on its behavior on clean data: it prunes neurons that are least active when clean data is passed through the model. The goal of Feature Grinding is to relocate all clean samples to a different region in the feature space, as illustrated in Figure 1. The hypothesis is that by moving the clean samples, the trojaned samples retain their feature representation, which disentangles them from the clean target samples.
4.2 GRINDING AND RESTORATION
The goal of Feature Grinding is to modify the model’s feature extractor so that the updated, grinded model’s feature extractor predicts transformed features. First, the defender resets the parameters of the model’s head, which refers to the weights and biases of all layers that are on top of the model’s feature extraction layer. Then, the defender records all feature activations of the training dataset and perturbs them by applying a static, randomized transformation function before fine-tuning the victim model on these transformed features.
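A minimal sketch of these two preparatory steps follows; the reconstruction is ours, and the assumption of a single fc head and the helper names are not from the paper.

```python
import torch

def reset_head(model):
    # Re-initialize the classification head; here we assume a single fc layer.
    model.fc.reset_parameters()

@torch.no_grad()
def record_target_features(feature_extractor, loader, transform, device="cuda"):
    # Record T(phi_M(x)) once, using the original (trojan) feature extractor.
    # Assumes a non-shuffling loader so cached rows stay aligned with samples.
    targets = []
    for x, _ in loader:
        feats = feature_extractor(x.to(device)).flatten(1)  # phi_M(x)
        targets.append(transform(feats).cpu())              # T(phi_M(x))
    return torch.cat(targets)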
Feature Grinding passes through two phases: grinding and restoration. In the grinding phase, the victim model is fine-tuned to predict the transformed features given clean samples as inputs. The model may lose some of its test accuracy during the grinding phase. As compensation, we use a restoration phase that focuses on regaining the model’s test accuracy. Both phases can be incorporated into any standard training procedure by altering the model’s loss function.
Assume the defender receives a trojan model M and wants to derive a model M′ that has been sanitized of any backdoor present in M. When using Feature Grinding, the defender adds a term L_f to the model’s loss during the grinding and restoration phases. L_f can be described as follows, given some transformation T(·), the feature extractor φ_M(·) and some input x for model M:

L_f(x) = ‖φ_{M′}(x) − T(φ_M(x))‖
The total loss L for both phases is a weighted sum of the task loss L_t and the Feature Grinding loss L_f. We use a parameter α ∈ [0, 1] to trade off both loss terms. In our experiments, we use α = 0.8 during the grinding phase and α = 0.2 during the restoration phase.

L(x) = α L_f(x) + (1 − α) L_t(x)
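A minimal sketch of the combined loss, assuming an L2 norm for L_f and cross-entropy for L_t (the text above leaves the norm and task loss unspecified):

```python
import torch.nn.functional as F

def feature_grinding_loss(feature_extractor, head, x, y, target_feats, alpha):
    feats = feature_extractor(x).flatten(1)          # phi_{M'}(x)
    l_f = (feats - target_feats).norm(dim=1).mean()  # L_f = ||phi_{M'}(x) - T(phi_M(x))||
    l_t = F.cross_entropy(head(feats), y)            # task loss L_t
    return alpha * l_f + (1.0 - alpha) * l_t         # L = a*L_f + (1-a)*L_t

# Per the paper: alpha = 0.8 during the grinding phase,
# alpha = 0.2 during the restoration phase.
```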
4.3 TRANSFORMATION
The transformation function is used to perturb the feature space of the victim model. Choosing a transformation function influences the efficiency of Feature Grinding. An effective transformation ensures that the success rate of the backdoor is decreased as much as possible (e.g., by retraining the entire model). An efficient transformation adds an additional constraint by trying to minimize the resources spent, i.e., the number of steps needed for the victim model to learn the transformed feature space, while remaining effective at sanitizing the backdoor.
Proposed Transformations. We design transformation functions that follow our intuition on achieving high efficiency. We experiment with the following transformation functions.
1. Permute (T_p): The perturbation consists of a random permutation of all features. The permutation is sampled randomly once and then applied to all features.
2. Rotate (T_r): The features are rotated in the n-dimensional space by sampling random rotation matrices using the approach of Stewart (1980).
3. Rotate-2d (T_r2): The features are rotated in a randomly sampled, two-dimensional plane by an angle θ.
All surveyed transformations are (i) sampled randomly once and (ii) automorphisms. Randomization is important because the applied transformation must be kept secret from the adversary to prevent them from adapting their backdoor to the transformation. The defender should post-hoc dispose of their records about the applied transformation. We believe that transformations which are automorphisms are advantageous because they preserve the structure of the high-dimensional space, which shortens the restoration phase.
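The sketches below illustrate how each transformation might be sampled. Two implementation choices are our own: we use scipy's Haar-uniform sampler over SO(n) in place of Stewart's algorithm (both produce uniformly random rotation matrices), and Rotate-2d rotates in a random coordinate plane with a placeholder default angle.

```python
import numpy as np
import torch
from scipy.stats import special_ortho_group

def make_permute(n):
    perm = torch.randperm(n)                 # T_p: sampled once, kept secret
    return lambda feats: feats[:, perm]

def make_rotate(n):
    # T_r: a uniformly random n-dimensional rotation matrix from SO(n).
    R = torch.as_tensor(special_ortho_group.rvs(n), dtype=torch.float32)
    return lambda feats: feats @ R.T

def make_rotate_2d(n, theta=np.pi / 2):
    # T_r2: a Givens rotation by angle theta in a random 2-d coordinate plane.
    i, j = np.random.choice(n, size=2, replace=False)
    R = torch.eye(n)
    c, s = float(np.cos(theta)), float(np.sin(theta))
    R[i, i], R[j, j] = c, c
    R[i, j], R[j, i] = -s, s
    return lambda feats: feats @ R.T
```

All three return a function mapping a batch of features (N, n) to transformed features of the same shape, so they can be plugged directly into the feature-recording step above.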
5 EVALUATION
In this section, we present our experimental setup and describe our evaluation criteria including the proposed AUPOC metric. Then we show the performance of Feature Grinding against seven contemporary backdoor attacks. We compare Feature Grinding with four other contemporary backdoor sanitation methods: Fine-Pruning (FP) (Liu et al., 2018a), Neural Cleanse (NC) (Wang et al., 2019), TABOR (Guo et al., 2019) and Neural Attention Distillation (NAD) (Li et al., 2021).
5.1 SETUP
In this section, we describe the setup for our experiments.
Hardware and Software. We perform all experiments on a local machine with a single A100 GPU with 40 GByte of VRAM and an AMD EPYC 7302 16-Core Processor. We conduct our experiments using PyTorch 1.9.0 and the trojanvision (Pang et al., 2020b) package version 1.0.10 that implements many of the surveyed backdoor attacks and defenses.
Datasets. We experiment with the following two standard image classification datasets.
• CIFAR-10 (Krizhevsky et al., 2009): Contains 50k training and 10k testing images with a dimension of 32x32 pixels and 10 class labels.
• ImageNet (Deng et al., 2009): Contains 1.23m training and 50k testing images with 1k class labels and we resize and center-crop the images to 256x256 pixels.
Network Architectures. We use standard training procedures for training a ResNet-18 (He et al., 2016) on CIFAR-10. The model is trained for 120 epochs with a learning rate initialized at 0.1. We decrease the learning rate by a factor of 10 when the model’s loss does not improve for two epochs and we use random cropping and cutout as data augmentation strategies. Our clean model achieves 96.06% test accuracy which is similar to the value reported in the original ResNet paper. For ImageNet, we rely on a pre-trained ResNet-50 that is made publicly available through the torchvision package2. The model has a test accuracy of 76.15%.
2 https://pytorch.org/vision/stable/models.html
Training Time. We measure the total training time of a clean model on CIFAR-10 and ImageNet as a point of reference to interpret the efficiency of a backdoor defense. For CIFAR-10, we observe that a model can be trained in 34 minutes on our hardware. For ImageNet, we measure the runtime for a single epoch and estimate the model’s total training time to be 70 hours for 120 training epochs. Table 1 shows the runtime for each defense. We observe that Feature Grinding is more than 11× faster than the runtime-optimized version of Neural Cleanse and 16× faster than TABOR.

Evaluation Metrics. In summary, we empirically measure the following four metrics.

• CDA (Clean Data Accuracy): The accuracy of the model on an unseen test dataset.
• ASR (Attack Success Rate): The rate at which the trojan model predicts the target label for malicious inputs that contain the backdoor trigger.
• ARR (Attack Recovery Rate): The rate at which a trojan model predicts the ground-truth label for malicious inputs that contain the backdoor trigger.
• Runtime: The runtime of a defense on our hardware.
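A sketch of how the first three metrics might be computed is given below (our notation: `trigger_fn` stamps the attacker's trigger onto a batch and `t` is the target label; both are only available when benchmarking, since a real defender does not know them). Conventions vary on whether samples already labeled t are excluded from the ASR; this sketch does not exclude them.

```python
import torch

@torch.no_grad()
def evaluate(model, loader, trigger_fn, t, device="cuda"):
    n = cda = asr = arr = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        clean_pred = model(x).argmax(dim=1)
        trojan_pred = model(trigger_fn(x)).argmax(dim=1)
        cda += (clean_pred == y).sum().item()   # clean data accuracy
        asr += (trojan_pred == t).sum().item()  # attack success rate
        arr += (trojan_pred == y).sum().item()  # attack recovery rate
        n += y.numel()
    return cda / n, asr / n, arr / n
```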
AUPOC. We want to compare the effectiveness of two defenses by their CDA and ASR. There is an apparent correlation between both metrics, where a low CDA predicts a low ASR. For example, assume the defender randomly assigns all weights of the victim model (CDA is equivalent to random guessing), then the ASR is expected to be no higher than random guessing as well. For a fair comparison between defenses, we derive a combined value from the CDA and ASR that allows a pairwise comparison of backdoor defenses. We propose using the Area Under the Pareto-Optimal Curve (AUPOC) that we record by ablating over multiple sets of parameters for each defense.
For a given defense and attack, the AUPOC can be derived as follows. We identify a set of parameters for the defense to include in an ablation study. Then, for each set of parameters, we record the ASR and CDA as a single data point. We draw the Pareto frontier between points that are Pareto-optimal, i.e., those points for which no other points exist that have a lower ASR and a higher CDA. To achieve closure of the curve, we add a point at an ASR of 1.0 with the highest CDA of the Pareto frontier. Let f(x) be the piece-wise function connecting all the points in the Pareto frontier, then the AUPOC is simply the integral over that curve:

AUPOC = ∫_0^1 f(x) dx
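A sketch of this computation, given a list of (ASR, CDA) points with one point per parameter set; the strict dominance test and the trapezoidal integration are our reading of the definition above.

```python
import numpy as np

def aupoc(points):
    """points: list of (asr, cda) pairs, one per parameter set."""
    # Keep Pareto-optimal points: no other point has both lower ASR and higher CDA.
    pareto = sorted((a, c) for a, c in points
                    if not any(a2 < a and c2 > c for a2, c2 in points))
    # Close the curve at ASR = 1.0 with the frontier's highest CDA.
    pareto.append((1.0, max(c for _, c in pareto)))
    xs, ys = zip(*pareto)
    return float(np.trapz(ys, xs))  # area under the piece-wise curve
```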
Note that a ’perfect’ AUPOC of 1.0 means that the defense achieves a CDA of 1.0 at an ASR of 0.0. This is unattainable if the clean model’s CDA is lower than 1.0, otherwise, the defense would improve the clean model. AUPOC is a relative measure to compare defenses under the same conditions. It is not useful as an absolute measure due to its sensitivity to the clean model’s CDA.
Parameter Ablation. Computing the AUPOC for each defense against each backdoor attack requires ablating over multiple sets of parameters for each defense. We rely on the parameters proposed in the authors’ papers for all surveyed backdoor defenses. Since we are interested in comparing efficiency, we keep the runtime constant and ablate over parameters such as the learning rate, rather than the number of epochs. For Fine-Pruning, we ablate over the pruning rate ρ ∈ [0.05, 0.99]. We optimize triggers in Neural Cleanse and TABOR for ten epochs and ablate over the learning rate α ∈ [0.00001, 0.00002] used for unlearning a trigger pattern. For NAD, we train the student and teacher model for five epochs each and ablate over the learning rate α ∈ [0.0001, 0.001]. We use the same cutout data augmentation strategy as the authors. We run Feature Grinding for five epochs and ablate over the number of epochs spent in the grinding stage e ∈ {2, 4} and the three transformations proposed in Section 4.3. All results for CIFAR-10 are computed as the mean value
over three repetitions and a single repetition for ImageNet (due to higher computational demands). Since defenses on ImageNet do not require the entire training data, we give each defense access to a random subset of 100k clean training samples rather than the whole dataset of 1.23m records.
5.2 EFFECTIVENESS
Our goal is to measure the effectiveness of each backdoor attack against each defense. Figures 2 and 3 show the Pareto-optimal Curves and AUPOC metrics for all five defenses. The plot shows the ASR on the x-axis in relation to the CDA and we show only data points that are members of the Pareto frontier. Defenses with a higher AUPOC are more effective.
CIFAR-10: Figure 2 shows the results for CIFAR-10. We observe that all defenses are effective at sanitizing a majority of the backdoors. Fine-Pruning has a relatively low AUPOC of less than 0.9 against the ABE, Latent Backdoor and Badnet attacks. For example, its best (i.e., lowest) ASR against the ABE backdoor is about 20 percent, which is relatively high compared to other sanitation methods such as Feature Grinding, which achieves a best ASR of 0 percent. The remaining defenses sanitize all backdoors effectively with an AUPOC of at least 0.937. Feature Grinding has the highest AUPOC of all defenses against every backdoor attack. NAD effectively sanitizes the backdoors, but the models’ CDAs are about 1 percent lower compared to Feature Grinding.
ImageNet: Figure 3 shows the results for ImageNet. We observe that it is more difficult to remove backdoors from ImageNet models than from CIFAR-10 models. Fine-Pruning achieves a low AUPOC of less than 0.5 against the Latent Backdoor and the Refool attack. Similarly, NAD has low effectiveness against the Badnet and IMC attacks. For example, the best ASR that NAD achieves against IMC is still about 57 percent. Neural Cleanse and TABOR are completely ineffective against the Refool attack with AUPOCs of 0.021 and 0.030, which is expected because Refool applies the trigger pattern across the entire image. Feature Grinding achieves relatively high AUPOCs against each backdoor attack. We measure its lowest AUPOC of 0.585 against the IMC attack, meaning
that the best (i.e., lowest) ASR is about 22 percent. Overall, we observe that Feature Grinding has the best worst-case performance against all backdoor attacks compared to all other defenses.
5.3 ATTACK RECOVERY
In this experiment, we compare the attack recovery rates (ARR) for each defense. The results for CIFAR-10 and ImageNet are shown in Figures 2f and 3f. The bar charts show the defense on the x-axis and the ARR on the y-axis, and the dashed horizontal line represents the clean model’s CDA. A high ARR indicates that the sanitized model predicts the correct, ground-truth class for a backdoored input and shows the defense’s effectiveness against a non-targeted attacker.
We observe that for both datasets, Feature Grinding has a high ARR against every attack. This supports the hypothesis stated in Section 4.1 that Feature Grinding successfully disentangles clean and backdoored samples. For CIFAR-10, Neural Cleanse and TABOR have the highest ARRs, followed by Feature Grinding and NAD. Fine-Pruning has a significantly lower ARR against multiple backdoors. On ImageNet, Feature Grinding has similar ARRs as Neural Cleanse and TABOR, except for Refool against which Feature Grinding has a significantly higher ARR than all other defenses.
6 CONCLUSION
We proposed Feature Grinding as an efficient backdoor sanitation method that can be performed using a low amount of computational resources. Our experiments on the image classification datasets CIFAR-10 and ImageNet have shown that Feature Grinding is highly efficient and can sanitize all seven surveyed backdoors. On ImageNet, Feature Grinding is approximately 11× faster than Neural Cleanse and about 16× faster than TABOR with a similar (or better) effectiveness. We propose the AUPOC metric to fairly evaluate the effectiveness of backdoor sanitation methods. Our evaluation shows that other fast sanitation methods from related work, such as Fine-Pruning and Neural Attention Distillation, do not achieve the same AUPOC as Feature Grinding against all backdoor attacks. We hope that our work leads to further research into efficient backdoor sanitation methods. | 1. What is the focus and contribution of the paper regarding backdoor defense?
2. What are the strengths and weaknesses of the proposed "feature-grinding" method?
3. How does the reviewer assess the effectiveness of the proposed method compared to other approaches like Neural Cleanse?
4. What are the concerns regarding the theoretical connection and empirical evaluation of the proposed approach?
5. Are there any limitations in discussing and experimenting with recent types of attacks?
6. How can the visualization of figures 1 and 2 be improved? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a "feature-grinding" method to defend against backdoor attacks. The idea is to increase the distance between the clean and backdoor samples of the same target class in the latent space, assuming access to clean samples. The paper also proposes a new metric, called AUPOC, to assess the effectiveness of defense methods. Experimental results show some effectiveness of the proposed feature-grinding method.
Review
The paper is generally easy to follow and the motivations behind the proposed method are reasonable. However, I think there are a few important limitations:
The main thesis of the paper is that because of limited computational resources, training is outsourced; thus, a backdoor defense that relies on fewer computational resources is generally better. For this reason, Feature Grinding is proposed. However, the discussions and experimental results in the paper do not provide convincing evidence for this proposal. Below are some reasons:
Assuming access to the entire training set (end of Section 3) is a pretty strong assumption in the backdoor domain. For example, Neural Cleanse assumes only a subset of clean test data. I also argue that a likely reason for outsourcing the training process is the large amount of often proprietary training data. Thus, this assumption is impractical.
The experiment w.r.t. runtime for Neural Cleanse (NC) is not convincing. While it is true that NC optimizes a separate trigger for each class, so a higher number of classes requires a longer runtime, the optimization for each class can be run in parallel, independently of the other classes. Furthermore, the proposed method updates all of the model's parameters (thus requiring gradients to be stored during training, which could be expensive for large models), while NC does not, which is another advantage of NC. I would argue that for larger models, NC will not have a problem while the proposed method will need GPUs with more memory.
In general, it is hard to see the advantages of the proposed method in terms of computation given the current motivations and empirical evaluation.
I am also concerned about the theoretical connection between the proposed approach and the discussion in Section 4.1. I am not entirely sure how Feature Grinding achieves the desired effect, either from a theoretical or an empirical perspective. Furthermore, there are not enough details about the proposed two phases. What are the objectives of the two phases? How long is each phase trained?
Some recent types of attacks are not investigated, for example, Doan et al. (ICCV 2021) or Nguyen et al. (ICLR 2021). These methods have different attack mechanisms than existing attack methods and should also be considered.
Minor: I also find it difficult to view and understand Figures 1 and 2.
Doan et al. ICCV 2021. LIRA: Learnable, Imperceptible and Robust Backdoor Attacks
Nguyen et al. ICLR 2021. WaNet -- Imperceptible Warping-based Backdoor Attack |
ICLR | Title
Feature Grinding: Efficient Backdoor Sanitation in Deep Neural Networks
Abstract
Training deep neural networks (DNNs) is expensive and for this reason, third parties provide computational resources to train models. This makes DNNs vulnerable to backdoor attacks, in which the third party maliciously injects hidden functionalities in the model at training time. Removing a backdoor is challenging because although the defender has access to a clean, labeled dataset, they only have limited computational resources which are a fraction of the resources required to train a model from scratch. We propose Feature Grinding as an efficient, randomized backdoor sanitation technique against seven contemporary backdoors on CIFAR-10 and ImageNet. Feature Grinding requires at most six percent of the model’s training time on CIFAR-10 and at most two percent on ImageNet for sanitizing the surveyed backdoors. We compare Feature Grinding with five other sanitation methods and find that it is often the most effective at decreasing the backdoor’s success rate while preserving a high model accuracy. Our experiments include an ablation study over multiple parameters for each backdoor attack and sanitation technique to ensure a fair evaluation of all methods. Models suspected of containing a backdoor can be Feature Grinded using limited resources, which makes it a practical defense against backdoors that can be incorporated into any standard training procedure.
1 INTRODUCTION
Deep neural networks (DNNs) are large and complex. Many systems deployed in the real world make use of DNNs, such as surveillance systems (Singh et al., 2018; Wang et al., 2017), self-driving cars (Bojarski et al., 2016; Dosovitskiy et al., 2017), and biometric authentication systems (Boles & Rad, 2017; Liu et al., 2018b). Training DNNs is expensive and for this reason, computation is often outsourced to third parties or publicly available, pre-trained DNNs are re-used. This convenience comes at the cost of security, as these third parties may act maliciously.
A critical security threat are backdoor attacks. Thereby, the third-party embeds hidden functionality into a trojan model that forces targeted misclassifications when a trigger is present in an input. The model functions normally for inputs without a trigger. In practice, backdoors can lead to crashes in self-driving cars (Versprille, 2015), surveillance systems with blind spots (Cooper, 2014), and biometric authentication systems granting access to unauthorized persons (Lovisotto et al., 2020).
Backdoor attacks and defenses are a well-studied subject in the field of secure machine learning. Backdoor attacks have the goal to remain effective by achieving a high attack success rate, being hard to detect and robust against model modification and sanitation. Existing backdoor attacks assume various capabilities of an attacker, such as (i) poisoning the training dataset (Gu et al., 2017; Liu et al., 2017; 2020; Pang et al., 2020a; Shokri et al., 2020), (ii) modifying the model’s training code (Turner et al., 2018; Saha et al., 2020) or (iii) controlling the trojaned model’s architecture (and parameters) (Hong et al., 2021; Yao et al., 2019; Tang et al., 2020).
Backdoor defenses aim to decrease the attack’s success rate as much as possible to make the exploitation of a backdoor unreliable in practice. Thereby, the defender has access to a set of non-poisoned, clean data with ground-truth labels and is given a model suspected of containing a backdoor. Defenses can be deployed at the model’s inference or training stage. Defenses deployed at inference time either pre-process inputs with the goal to render triggers unrecognizable (Cohen et al., 2019; Meng & Chen, 2017), or they run a detection algorithm for every input to predict whether it contains
a trigger (Chen et al., 2018; Udeshi et al., 2019). Defenses deployed during training either preemptively sanitize a model suspected of containing a backdoor (Liu et al., 2018a; Li et al., 2021), or they run a backdoor detection algorithm before sanitation (Wang et al., 2019; Guo et al., 2019).
Existing backdoor defenses are evaluated with a focus on their effectiveness at sanitizing a backdoor while maintaining the model’s utility. We find that evaluating the defense’s efficiency is often neglected. For example, the runtime of Neural Cleanse (Wang et al., 2019) scales proportionally with the number of classes, which can be feasible for 10 classes, but becomes infeasible for 1k classes. In practice, the motivation of a defender to engage with third parties and rely on their pre-trained models or computational resources is often rooted in a lack of adequate resources in the defender’s control. Maintaining high-performance hardware may be more expensive than booking resources on-demand from third parties (Saiyeda & Mir, 2017). Pre-trained models are readily available online at low or no cost1. Defenses have to be executed in a trusted environment on resources available to the defender. The decision of whether to use a defense is bounded by the defender’s available resources. We believe that a simple, minimal defense leveraging as few computational resources as possible while remaining effective is missing from related work.
We propose Feature Grinding as an efficient backdoor sanitation method. Feature Grinding requires low computational resources compared with four state-of-the-art backdoor sanitation approaches and achieves similar effectiveness. Our defense acts as a regularization method on the penultimate layer, also referred to as the feature layer, of the trojan DNN. The goal is to apply a transformation that increases the distance between predicted features of clean and trojan samples from the same target class. Feature Grinding requires only access to clean samples. Our experiments on the image classification datasets CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009) demonstrate that these transformations can be (i) learned quickly by the trojan model and (ii) that they sanitize backdoors effectively.
1.1 CONTRIBUTIONS
In summary, we claim the following contributions.
1. We propose an efficient backdoor sanitation method called Feature Grinding that can be performed with limited computational resources.
2. Feature Grinding sanitizes trojan models from all seven surveyed backdoor attacks.
3. We conduct an extensive evaluation on the image classification datasets CIFAR-10 and ImageNet, comparing Feature Grinding with four other backdoor sanitation methods.
4. We propose a metric to compare sanitation methods called Area Under the Pareto-Optimal Curve (AUPOC). AUPOC shows the trade-off between the clean data accuracy (CDA) and the attack’s success rate (ASR). We ablate over sets of parameters for each defense and compute the AUC for the best (i.e., Pareto-optimal) parameters given the CDA and ASR.
2 RELATED WORK
2.1 BACKDOORS ATTACK
A deep neural network (DNN) contains a backdoor if an attacker can add a secret backdoor trigger to inputs presented to a victim model, which causes targeted misclassifications. These triggers are typically hidden (e.g., small or imperceptible) and are only known to the attacker. Existing backdoor attacks assume different capabilities of an attacker, which can be summarized as follows.
• Poisoning: The attacker can inject poisoned samples into the training dataset. In clean label attacks, the attacker can only poison inputs, but cannot control their target labels.
• Training Code: The attacker can modify the training code (e.g., the model’s loss function).
• Model Architecture: The attacker has control over the victim model’s architecture.
1https://modelzoo.co
We study seven contemporary backdoor attacks from related work. In this paper, we focus on attacks that assume the attacker can (i) poison the training data or (ii) modify the training code. The seven surveyed backdoor attacks from related work can be summarized as follows.
Badnet (Gu et al., 2017) assumes that an attacker can poison the training data, but not modify the training code. The authors propose injecting samples using static trigger patterns such as a white square with poisoned labels. Clean-Label (Turner et al., 2018) is the first poisoning attack that does not require changing the poisoned input’s target labels. This makes it more difficult for the defender to remove poisoned inputs from their dataset before training the model. They stamp a nearly opaque trigger on inputs from the target class and adversarially perturb them to impede the victim model’s ability to learn from the image’s content and instead learn to associate the trigger with the target class. Refool (Liu et al., 2020) is a clean label attack that uses natural reflections as a trigger, blending other images with images from the target class, to embed their backdoor.
TrojanNN (Liu et al., 2017) requires control over the model’s training code for fine-tuning the victim model. They generate a modified trigger pattern optimized for exciting a subset of the victim model’s internal neurons and then fine-tune the model to embed a backdoor with these triggers. The Latent backdoor (Yao et al., 2019) has been designed to withstand the transfer learning process, in which a pre-trained, trojan teacher is fine-tuned for a related, but different task. They assume full control over the teacher’s training process and add a loss term that encourages embedding the backdoor in the lower layers of the teacher model. The Adversarial Backdoor Embedding (ABE) method (Shokri et al., 2020) modifies the model’s loss function during training to minimize the distance between poisoned and clean samples in the victim model’s feature layer. The Input-Model Co-Optimization (IMC) method (Pang et al., 2020a) optimizes both the victim model and the trigger used to implement the backdoor. The authors formulate an objective to craft a backdoor in conjunction with a trojan model and use alternating optimization of the model and inputs.
2.2 BACKDOOR DEFENSES
The goal of a backdoor defense is to prevent an attacker from exploiting their backdoor by either suppressing it through input preprocessing or by sanitizing the model. Following the categorization by Pang et al. (2020b), there are four categories of backdoor defenses.
• Input Reformation: Each input to the victim model is pre-processed with the goal to disable a trigger by rendering it unrecognizable to the victim model.
• Input Filtering: A binary classification is made for each input to the victim model whether it contains a trigger. Inputs that are predicted to contain a trigger can be discarded.
• Model Sanitation: The victim model is modified with the goal of losing the backdoor’s functionality while preserving the task’s functionality. Sanitation is done preemptively without prior detection of a backdoor.
• Model Inspection: An extension of model sanitation methods that first discover and reverse-engineer backdoors before sanitizing the model.
We survey four model sanitation or inspection methods from related work. All methods assume access to (i) the trojan model’s parameters and (ii) a subset of clean training data, but not to the backdoor’s trigger patterns. Fine-Pruning (Liu et al., 2018a) iteratively prunes dormant neurons that are suspected to implement a backdoor. Neural Attention Distillation (NAD) (Li et al., 2021) is a method to quickly distill a student using the teacher’s attention maps without retraining the student from scratch. Neural Cleanse (Wang et al., 2019) and TABOR (Guo et al., 2019) are model inspection methods that reverse-engineer a backdoor by an optimization process. The objective is to optimize inputs for a static trigger pattern that adversarially modifies the trojan model’s prediction of any input towards a target class. Since the target class is unknown, both methods iterate through each candidate target class and generate at least one trigger per class. TABOR is an extension of Neural Cleanse that specifically allows reverse-engineering large and complex triggers. The authors propose two methods to remove reverse-engineered triggers from a model. The first approach uses unlearning in which the model is fine-tuned with the trigger and the ground-truth label. The second approach uses pruning, in which those internal neurons of the trojan model are removed with the highest activation for the reverse-engineered trigger pattern.
3 THREAT MODEL
Training a DNN model is expensive and hence computation is often outsourced to third parties. As input, the defender specifies (i) the training dataset including ground-truth labels, (ii) the training code and (iii) the model architecture. During training, the attacker (i) injects poisoned data into the training dataset and (ii) modifies the training code, but they cannot modify the trojan model’s architecture. After training, the attacker sends the trojan model to the defender. The defender’s objective is to sanitize any backdoors present in the model using limited computational resources.
The defender’s primary goal is to sanitize the backdoor against a targeted attacker. A targeted attacker wins if the trojan model has a high success rate of predicting a trojan input with the target label. Given a trigger δ, a clean dataset D, a target label t and a model M , the attacker wins if the following condition holds for > 0.
Pr x∈D
[M(x+ δ) = t] > 1−
The defender’s secondary goal is to defend against a non-targeted attacker. A non-targeted attacker wins when the model has a high probability of misclassifying trojan inputs as any target class other than the ground-truth. Defending against non-targeted attacks is more challenging for the defender.
We believe that a defender with limited computational resources is a realistic and practically relevant assumption for multiple reasons. Prices for third party computational resources are decreasing and in many scenarios, it becomes more affordable to occasionally book external resources rather than continually maintaining dedicated hardware. Professional third party hardware often features hightiered GPUs that allow for rapid development due to low training times. A defender may have the capabilities to run a few training steps themselves on their hardware. However, the aforementioned higher monetary costs and runtimes may deter the defender from performing backdoor sanitation and make them more inclined to accept the risk of a backdoor. The most effective backdoor sanitation approaches are useless if their runtime exceeds the defender’s limits. Efficient backdoor sanitation strategies try to minimize the runtime, to allow acting on even a weak suspicion of a backdoor present in their model. We do not put hard constraints on the defender’s computational power, but instead, compare sanitation methods by their execution time relative to the model’s training time.
In summary, our attacker has access to the entire training dataset and its ground-truth labels, the model’s training code, and significant computational resources. However, they cannot modify the model’s architecture without the defender taking notice. Our defender has access to the entire training dataset and the ground-truth labels, the trojan model, but only limited computational resources.
4 FEATURE GRINDING
4.1 MOTIVATION
A DNN classifier is composed of a sequence of functions σ(fm(fm−1(..f1(x)))), where fi represents the i-th hidden layer, σ(·) is the softmax activation and x ∈ Rn represents the input. Feature grinding is applied to the penultimate layer fm−1(·), of the DNN model, which we refer to as the model’s feature extractor. The feature extractor of a model M is commonly referred to as φM (·). This layer is a compressed representation of the input and the features form a high-dimensional feature space. It has been observed that distances between features are semantically meaningful for well-trained models with respect to the task (Zhang et al., 2018).
Inputs belonging to the same class typically form dense clusters in the feature space. A backdoor’s objective is to map a trojan input to a feature vector located within the cluster composing the target class. The goal of a backdoor sanitation method is to disentangle the trojan samples from the cluster of the target class by increasing the separation between the clusters of clean and trojan samples. However, the defender does not know the trigger pattern. There are two conceptual approaches to achieve this disentanglement. Either, the trojan examples are reverse-engineered and then sanitized by projecting them to a different area in the feature space, or all clean examples are moved to a different region in the feature space.
Difficulties with the first approach (reverse-engineering) are that (i) formalizing complex triggers spanning multiple regions of the image may be difficult and (ii) it is computationally expensive to
optimize for an unknown trigger and target label. Once the trigger has been reverse-engineered, an advantage is that the trojan model can be updated with minimal side-effects through unlearning or pruning with the expectation that the model retains a high test accuracy. This idea motivates model inspection methods such as Neural Cleanse (Wang et al., 2019) and TABOR (Guo et al., 2019).
A different methodology, used in Feature Grinding, is to modify the trojaned model using only clean samples without attempting to reverse-engineer trigger patterns. This method is expected to have greater side-effects to the model, but it requires significantly fewer computational resources. For example, fine-pruning (Liu et al., 2018a) updates the trojaned model purely on its actions on clean data. It prunes neurons that are least active when clean data is passed through the model. The goal of Feature Grinding is to relocate all clean samples to a different region in the feature space, as illustrated in Figure 1. The hypothesis is that by moving clean samples, the trojaned samples retain their feature representation which disentangles them from the clean target samples.
4.2 GRINDING AND RESTORATION
The goal of Feature Grinding is to modify the model’s feature extractor so that the updated, grinded model’s feature extractor predicts transformed features. First, the defender resets the parameters of the model’s head, which refers to the weights and biases of all layers that are on top of the model’s feature extraction layer. Then, the defender records all feature activations of the training dataset and perturbs them by applying a static, randomized transformation function before fine-tuning the victim model on these transformed features.
Feature Grinding passes through two phases: grinding and restoration. In the grinding phase, the victim model is fine-tuned to predict the transformed features given clean samples as inputs. The model may lose some of its test accuracy during the grinding phase. As compensation, we use a restoration phase that focuses on regaining the model’s test accuracy. Both phases can be incorporated into any standard training procedure by altering the model’s loss function.
Assume the defender receives a trojan model M and wants to derive a model M ′ that has been sanitized of any backdoor present in M . When using Feature Grinding, the defender adds a term Lf to the model’s loss during the grinding and restoration phases. Lf can be described as follows, given some transformation T (·), the feature extractor φM (·) and some input x for model M .
Lf (x) = ‖φM ′(x)− T (φM (x))‖
The total loss L for both phases is a sum of the task loss Lt and the Feature Grinding loss Lf . We use a parameter α ∈ R to trade-off both loss terms. In our experiments, we use α = 0.8 during the grinding phase and α = 0.2 during the restoration phase.
L(x) = αLf (x) + (1− α)Lt(x)
4.3 TRANSFORMATION
The transformation function is used to perturb the feature space of the victim model. Choosing a transformation function influences the efficiency of Feature Grinding. An effective transformation ensures that the success rate of the backdoor is decreased as much as possible (e.g., by retraining the entire model). An efficient transformation adds an additional constraint by trying to minimize the resources spent, i.e., the number of steps needed for the victim model to learn the transformed feature space, while remaining effective at sanitizing the backdoor.
Proposed Transformations. We design transformation functions that follow our intuition on achieving high efficiency. We experiment with the following transformation functions.
1. Permute (Tp): The perturbation consists of a random permutation of all features. The permutation is sampled randomly once and then applied to all features.
2. Rotate (Tr): The features are rotated in the n-dimensional space by sampling random rotation matrices using the approach of Stewart (Stewart, 1980).
3. Rotate-2d (Tr2)): The features are rotated in a randomly sampled, two dimensional plane by an angle θ.
All surveyed transformations are (i) sampled randomly once and (ii) they are automorphisms. Randomization is important because the applied transformation must be kept secret from the adversary to avoid them from adapting their backdoor to the transformation. The defender should post-hoc dispose of their records about the applied transformation. We believe that transformations which are automorphisms are advantageous because they preserve the structure of the high-dimensional space which shortens the restoration phase.
5 EVALUATION
In this section, we present our experimental setup and describe our evaluation criteria including the proposed AUPOC metric. Then we show the performance of Feature Grinding against seven contemporary backdoor attacks. We compare Feature Grinding with four other contemporary backdoor sanitation methods: Fine-Pruning (FP) (Liu et al., 2018a), Neural Cleanse (NC) (Wang et al., 2019), TABOR (Guo et al., 2019) and Neural Attention Distillation (NAD) (Li et al., 2021).
5.1 SETUP
In this section, we describe the setup for our experiments.
Hardware and Software. We perform all experiments on a local machine with a single A100 GPU with 40 GByte of VRAM and an AMD EPYC 7302 16-Core Processor. We conduct our experiments using PyTorch 1.9.0 and the trojanvision (Pang et al., 2020b) package version 1.0.10 that implements many of the surveyed backdoor attacks and defenses.
Datasets. We experiment with the following two standard image classification datasets.
• CIFAR-10 (Krizhevsky et al., 2009): Contains 50k training and 10k testing images with a dimension of 32x32 pixels and 10 class labels.
• ImageNet (Deng et al., 2009): Contains 1.23m training and 50k testing images with 1k class labels and we resize and center-crop the images to 256x256 pixels.
Network Architectures. We use standard training procedures for training a ResNet-18 (He et al., 2016) on CIFAR-10. The model is trained for 120 epochs with a learning rate initialized at 0.1. We decrease the learning rate by a factor of 10 when the model's loss does not improve for two epochs, and we use random cropping and cutout as data augmentation strategies. Our clean model achieves 96.06% test accuracy, which is similar to the value reported in the original ResNet paper. For ImageNet, we rely on a pre-trained ResNet-50 that is made publicly available through the torchvision package².

² https://pytorch.org/vision/stable/models.html
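For concreteness, the described CIFAR-10 schedule could be set up in PyTorch roughly as below; the optimizer type, momentum, and weight decay are our assumptions, since the text only fixes the epoch count, the initial learning rate, and the plateau rule.

```python
import torch
import torchvision

model = torchvision.models.resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)  # assumed values
# "Decrease the learning rate by a factor of 10 when the model's loss
# does not improve for two epochs":
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=2)
# Each of the 120 epochs then runs a standard training pass (with random
# cropping and cutout) followed by scheduler.step(epoch_loss).
```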
Training Time. We measure the total training time of a clean model on CIFAR-10 and ImageNet as a point of reference to interpret the efficiency of a backdoor defense. For CIFAR-10, we observe that a model can be trained in 34 minutes on our hardware. For ImageNet, we measure the runtime for a single epoch and estimate the model's total training time to be 70 hours for 120 training epochs. Table 1 shows the runtime for each defense. We observe that Feature Grinding is more than 11× faster than the runtime-optimized version of Neural Cleanse and 16× faster than TABOR.

Evaluation Metrics. In summary, we empirically measure the following four metrics.
• CDA (Clean Data Accuracy): The accuracy of the model on an unseen test dataset.
• ASR (Attack Success Rate): The rate at which the trojan model predicts the target label for malicious inputs that contain the backdoor trigger.
• ARR (Attack Recovery Rate): The rate at which a trojan model predicts the ground-truth label for malicious inputs that contain the backdoor trigger.
• Runtime: The runtime of a defense on our hardware.
AUPOC. We want to compare the effectiveness of two defenses by their CDA and ASR. There is an apparent correlation between both metrics, where a low CDA predicts a low ASR. For example, if the defender randomly assigns all weights of the victim model (so that the CDA is equivalent to random guessing), then the ASR is expected to be no higher than random guessing as well. For a fair comparison between defenses, we derive a combined value from the CDA and ASR that allows a pairwise comparison of backdoor defenses. We propose using the Area Under the Pareto-Optimal Curve (AUPOC), which we record by ablating over multiple sets of parameters for each defense.
For a given defense and attack, the AUPOC can be derived as follows. We identify a set of parameters for the defense to include in an ablation study. Then, for each set of parameters, we record the ASR and CDA as a single data point. We draw the Pareto frontier between points that are Pareto-optimal, i.e., those points for which no other points exist that have a lower ASR and a higher CDA. To achieve closure of the curve, we add a point at an ASR of 1.0 with the highest CDA of the Pareto frontier. Let f(x) be the piece-wise function connecting all the points in the Pareto frontier; the AUPOC is then simply the integral over that curve.
$$\text{AUPOC} = \int_0^1 f(x)\, dx$$
Note that a ’perfect’ AUPOC of 1.0 means that the defense achieves a CDA of 1.0 at an ASR of 0.0. This is unattainable if the clean model’s CDA is lower than 1.0, otherwise, the defense would improve the clean model. AUPOC is a relative measure to compare defenses under the same conditions. It is not useful as an absolute measure due to its sensitivity to the clean model’s CDA.
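A small sketch of this computation, under our reading that f interpolates piece-wise linearly between frontier points:

```python
import numpy as np

def aupoc(points):
    """Area Under the Pareto-Optimal Curve from (ASR, CDA) pairs.

    Keeps points for which no other point has both a lower ASR and a higher
    CDA, closes the curve at ASR = 1.0 with the frontier's highest CDA,
    and integrates CDA over ASR.
    """
    frontier = sorted(p for p in points
                      if not any(q[0] < p[0] and q[1] > p[1] for q in points))
    asr = [p[0] for p in frontier] + [1.0]
    cda = [p[1] for p in frontier] + [max(p[1] for p in frontier)]
    return np.trapz(cda, asr)

# One (ASR, CDA) point per parameter set in the ablation study:
print(aupoc([(0.02, 0.93), (0.10, 0.95), (0.30, 0.96)]))  # ~0.94
```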
Parameter Ablation. Computing the AUPOC for each defense against each backdoor attack requires ablating over multiple sets of parameters for each defense. We rely on the parameters proposed in the respective authors' papers for all surveyed backdoor defenses. Since we are interested in comparing efficiency, we keep the runtime constant and ablate over parameters such as the learning rate, rather than the number of epochs. For Fine-Pruning, we ablate over the pruning rate ρ ∈ [0.05, 0.99]. We optimize triggers in Neural Cleanse and TABOR for ten epochs and ablate over the learning rate α ∈ [0.00001, 0.00002] used for unlearning a trigger pattern. For NAD, we train the student and teacher model for five epochs each and ablate over the learning rate α ∈ [0.0001, 0.001]. We use the same cutout data augmentation strategy as the authors. We run Feature Grinding for five epochs and ablate over the number of epochs spent in the grinding stage e ∈ {2, 4} and the three transformations proposed in Section 4.3. All results for CIFAR-10 are computed as the mean value over three repetitions, with a single repetition for ImageNet (due to higher computational demands). Since defenses on ImageNet do not require the entire training data, we give each defense access to a random subset of 100k clean training samples rather than the whole dataset of 1.23m records.
5.2 EFFECTIVENESS
Our goal is to measure the effectiveness of each backdoor attack against each defense. Figures 2 and 3 show the Pareto-optimal Curves and AUPOC metrics for all five defenses. The plot shows the ASR on the x-axis in relation to the CDA and we show only data points that are members of the Pareto frontier. Defenses with a higher AUPOC are more effective.
CIFAR-10: Figure 2 shows the results for CIFAR-10. We observe that all defenses are effective at sanitizing a majority of the backdoors. Fine-Pruning has a relatively low AUPOC of less than 0.9 against the ABE, Latent Backdoor and Badnet attacks. For example, the best (i.e., lowest) ASR against the ABE backdoor is about 20 percent, which is relatively high compared to other sanitation methods such as Feature Grinding, which achieves a best ASR of 0 percent. The remaining defenses sanitize all backdoors effectively with an AUPOC of at least 0.937. Feature Grinding has the highest AUPOCs out of all defenses against every backdoor attack. NAD effectively sanitizes the backdoors, but the models' CDAs are about 1 percent lower compared to Feature Grinding.
ImageNet: Figure 3 shows the results for ImageNet. We observe that it is more difficult to remove backdoors from ImageNet models than from CIFAR-10 models. Fine-Pruning achieves a low AUPOC of less than 0.5 against the Latent Backdoor and the Refool attack. Similarly, NAD has a low effectiveness against the Badnet and IMC attacks. For example, the best ASR that NAD achieves against IMC is still about 57 percent. Neural Cleanse and TABOR are completely ineffective against the Refool attack with an AUPOC of 0.021 and 0.030, which is expected because Refool applies the trigger pattern across the entire image. Feature Grinding achieves relatively high AUPOCs against each backdoor attack. We measure the lowest AUPOC of 0.585 against the IMC attack, meaning
that the best (i.e., lowest) ASR is about 22 percent. Overall, we observe that Feature Grinding has the best worst-case performance against all backdoor attacks compared to all other defenses.
5.3 ATTACK RECOVERY
In this experiment, we compare the attack recovery rates (ARR) for each defense. The results for CIFAR-10 and ImageNet are shown in Figures 2f and 3f. The bar charts show the defense on the x-axis and the ARR on the y-axis; the dashed horizontal line represents the clean model's CDA. A high ARR indicates that the sanitized model predicts the correct, ground-truth class for a backdoored input and shows the defense's effectiveness against a non-targeted attacker.
We observe that for both datasets, Feature Grinding has a high ARR against every attack. This supports the hypothesis stated in Section 4.1 that Feature Grinding successfully disentangles clean and backdoored samples. For CIFAR-10, Neural Cleanse and TABOR have the highest ARRs, followed by Feature Grinding and NAD. Fine-Pruning has a significantly lower ARR against multiple backdoors. On ImageNet, Feature Grinding has similar ARRs as Neural Cleanse and TABOR, except for Refool against which Feature Grinding has a significantly higher ARR than all other defenses.
6 CONCLUSION
We proposed Feature Grinding as an efficient backdoor sanitation method that can be performed using a low amount of computational resources. Our experiments on the image classification datasets CIFAR-10 and ImageNet have shown that Feature Grinding is highly efficient and can sanitize all seven surveyed backdoors. On ImageNet, Feature Grinding is approximately 11× faster than Neural Cleanse and about 16× faster than TABOR with a similar (or better) effectiveness. We propose the AUPOC metric to fairly evaluate the effectiveness of backdoor sanitation methods. Our evaluation shows that other fast sanitation methods from related work, such as Fine-Pruning and Neural Attention Distillation, do not achieve the same AUPOC as Feature Grinding against all backdoor attacks. We hope that our work leads to further research into efficient backdoor sanitation methods.

1. What is the focus and contribution of the paper regarding backdoor removal?
2. What are the strengths and weaknesses of the proposed approach compared to other baseline methods?
3. Do you have any concerns about the threat model used in the paper?
4. How does the reviewer assess the novelty and effectiveness of the proposed technique?
5. What are some limitations and suggestions for future work regarding the evaluation and presentation of the results?

Summary Of The Paper
This paper proposes a new backdoor removal technique by transforming the activations from a feature layer (the penultimate layer) and fine-tuning the trojaned model. Specifically, it records all the activation values of the feature layer from all the clean samples in the entire training dataset and applies a pre-defined transformation on these activations. It then fine-tunes the trojaned model by minimizing the difference between the activations from the model and the transformed ones. The classification loss is also considered during fine-tuning. The evaluation is conducted on two datasets, CIFAR-10 and ImageNet. The experimental comparison with four baselines shows that the proposed technique has better performance on removing backdoors.
Review
It is interesting to use transformations on internal features for backdoor removal. The performance of the proposed technique is compared with four baselines. There are a few aspects that need improvements.
The threat model of this paper is impractical. It requires the defender to have the entire training dataset. If the defender has all the data, there is no need to use a model trained by a third party. The defender can directly train the model by herself. Even with the assumption of the defender having limited resources, the defender can use transfer learning to fine-tune her model on an existing well-trained feature extractor. The resource usage is no different from fine-tuning the trojaned model. Also, this paper claims "we do not put hard constraints on the defender's computational power", which makes the setup of this paper even more impractical. As this is the base of all the evaluations and comparisons with other methods, it is particularly important to justify the threat model.
The intuition for why the proposed approach works is not clear. The paper uses a conceptual explanation in Figure 1 to illustrate the intuition behind the proposed approach. From the figure, it can be observed that half of the poisoned samples still fall into the target class, meaning it can only mitigate half of the backdoor effect. The transformations used in the paper mainly change the ordering of feature dimensions. Although the approach records activations of clean samples, the feature ordering also changes correspondingly for backdoor samples. It is not clear why the proposed approach would work.
The novelty of the proposed approach is limited. The idea of constraining the internal features with transformed counterparts is straightforward. It is very similar to NAD, where the student model's features are constrained by matching with those from the teacher model. The transformations seem interesting. But they are just standard matrix transformations. Have those transformations been used in model training in existing works?
The evaluation is only conducted on two datasets and two model structures. There is a large set of (thousands of) pre-trained poisoned models from the TrojAI competition. Those models are trojaned with various backdoor settings, including different backdoors such as polygon triggers and filter triggers, and different attack models such as universal attacks and label-specific attacks. It is important to see the performance of the proposed technique on extensive test cases in comparison to baselines.
Figures 2 and 3 are hard to interpret. Different subfigures do not have a consistent value range for both the x-axis and y-axis, making the results hard to compare across different approaches. The presentation can be improved by showing the results of different approaches in the same figure. The results on different attacks can be placed in separate subfigures.
Typos and minor issues.
In abstract, "five other sanitation methods" -> "four other sanitation methods".
What is the task loss L_t? Is it the cross entropy loss?
On page 7, the learning rate range for Neural Cleanse and TABOR is "[0.00002, 0.00001]". The upper bound is smaller than the lower bound.
On page 8, for CIFAR-10, "Feature Grinding performs has" -> "Feature Grinding has".
1. What is the focus and contribution of the paper regarding backdoor defense?
2. What are the strengths of the proposed approach, particularly its efficiency and effectiveness?
3. What are the weaknesses of the paper, such as the lack of discussion on the method's correctness and missing technical details?
4. Do you have any concerns or suggestions regarding the implementation and evaluation of the proposed defense?
5. What are some limitations of the paper, such as the absence of an ablation study for certain modules and hyperparameters?

Summary Of The Paper
This paper proposes an efficient model-reconstruction-based backdoor defense, which intends to apply a transformation that increases the distance between predicted features of clean and poisoned samples from the same target class on the penultimate layer. The proposed method is based on the assumption that the transformation will move clean samples while maintaining the position of poisoned samples, thereby disentangling the two types of samples.
Review
Pros
1. The topic is of sufficient significance and interest to ICLR audiences.
2. The paper is well written and easy to follow.
3. Technically, the proposed method is moderately novel.
4. The proposed method seems to be more efficient, while having performance on par with baseline defenses.
Cons
1. Although I recognize the efficiency and the effectiveness of the proposed method, why the proposed method works needs further discussion. At least, the author should empirically justify the correctness of the assumption since it is not natural and not necessarily true. I will increase my score if this concern can be well addressed.
Missing important technical details.
More details on how the baseline attacks are implemented should be provided in the Appendix. For example, the latent backdoor attack is designed for transfer learning. How can it be used to evaluate the proposed backdoor defense, which does not target transfer learning?
How is the running efficiency of baseline defenses calculated? The running time of many defenses (e.g., Fine-Pruning and NAD) relies heavily on the number of epochs run. How the running iterations are determined needs more details. Using their default settings is not acceptable since the defense may have converged far before the last iteration.
It seems that the proposed defense is implemented after the attacked models are received. I think the author should provide the CDA and ASR of all baseline attacks before the defense to ensure that those attacks are trained well.
Missing important experiments.
No discussion about the selection of adopted transformations. How to select suitable transformations needs more discussion.
No ablation study about the grinding and restoration modules.
No ablation study about the effects of the hyper-parameter α.
ICLR
Title
Shallow Learning In Materio.
Abstract
We introduce Shallow Learning In Materio (SLIM) as a resource-efficient method to realize closed-loop higher-order perceptrons. Our SLIM method provides a rebuttal to the Minsky school’s disputes with the Rosenblatt school about the efficacy of learning representations in shallow perceptrons. As a proof-of-concept, here we devise a physically-scalable realization of the parity function. Our findings are relevant to artificial intelligence engineers, as well as neuroscientists and biologists.
1 Introduction
How do we best learn representations? We do not yet fully understand how cognition is manifested in any brain, not even in those of a worm (Rankin, 2004). It is an open question whether the shallow brain of a worm is capable of working memory, but if it were, then it certainly must depart from the mechanistic models of large-scale brains (Eliasmith et al., 2012). Nevertheless, worm-brain-inspired learning combined with "scalable" deep learning architectures has been employed in self-driving cars (Lechner et al., 2020). At present, by scalable we refer to TPU-based architectures (Jouppi et al., 2017) trained by gradient descent (Rumelhart et al., 1986). However, one could envision a super-scalable future that is less synthetic and based on self-organized nanomaterial systems (Bose et al., 2015; Chen et al., 2020; Mirigliano et al., 2021) that natively realize higher-order (Lawrence, 2022a) and recurrent neural networks. In this short communication, we shall lay yet another brick towards such a future by providing theoretical arguments.
Our perspective on cognitive material systems is illuminated in Figure 1. Deep learning owes its success to our technological capacity to synthesize massively-parallel and programmable electronic circuits. It is yet to fully exploit Darwinian and Hebbian learning methods that pioneers of the cybernetics movement experimented with by training homeostats (Ashby, 1952) and perceptrons (Rosenblatt, 1961). The spirit of Darwinian (Stanley et al., 2019) and Hebbian (Scellier & Bengio, 2017) learning continues to be alive, though. Here, we add fuel to that fire by advocating for an in-materio approach.
Employing physical systems in their native form for solving computational tasks has gained attention due to the efforts of the 'evolution in materio' community (Miller & Downing, 2002). The earliest result was by Pask (1960), who grew dendritic metallic threads in a ferrous sulphate solution to function as a sound-frequency discriminator (which he called an ear, quite romantically). Now, more recent efforts are under the banner of physical reservoir computing (Tanaka et al., 2019) for realizing sequential functionality. Here, we will commit to combinational functionality by equilibrium-point logic (Lawrence, 2022b) in material systems realizing closed-loop higher-order perceptrons.
2 Theory
Perceptrons were developed by Rosenblatt and his team, and were trained by a Hebbian learning rule (error-controlled reinforcement) with proven guarantees for convergence. Unfortunately, they started receiving a bad rap after Minsky & Papert (1988) published a proof that 2^N association neurons are required to learn the N-bit parity function. However, this analysis is only applicable if all neurons are threshold logic gates, what Rosenblatt called simple units. Physical neural networks, on the other hand, can natively realize complex units. Hence, we introduce a shallow learning in materio (SLIM) perceptron as depicted in Figure 2.
For a proof-of-concept, we commit to a minimally connected recurrent network with physical states s_i for i = 1 : N, yielding a state-space model of the form

$$\dot{s}_i = x_i + F_i(s_{i-1}, s_i, s_{i+1}), \qquad (1)$$

where F_i is a nonlinear function. We conjecture that all possible N-bit functions may be realized if arbitrary choices of F_{1:N} are allowed. At present there is no engineering theory to design an optimal F_i (even when N = 2).
We first take an approach amicable to discrete mathematics, and demonstrate equilibrium-point logic in Figure 3 with F_{1:2} designed as piecewise-constant functions. A promising approach to obtain F_i for higher dimensions is to identify an analogy with cellular automata that are capable of equilibrium-point parity logic in arbitrary dimensions (Betel et al., 2013). Obtaining scaling laws for the volume of state-space in action during equilibrium-point logic may be another worthy problem to ponder upon.
Imposing conditions of physical realizability on F_i would affect the neuronal capacity (Baldi & Vershynin, 2018) of our SLIM perceptron. To obtain an insight into the abundance of unique functions expressible by SLIM, let us consider a unit-resistor, learnable-threshold (w_i) diode network of the form

$$F_i(s_{i-1}, s_i, s_{i+1}) = s_{i-1} + s_{i+1} - 2 s_i - \mathrm{Ramp}(s_i - w_i), \qquad (2)$$

with s_0 ≡ s_1 and s_{N+1} ≡ s_N. The above equation is the simplest expression that captures the nonlinear synergetic interactivity found in the Lyapunov-stable resistor tunnel-diode networks studied in (Lawrence, 2022b). For each i = 1 : N, depending on sgn(s_i − w_i), there is a positive or a negative mode of equation 2, and thus there are 2^N modes of convergence to equilibrium. Each mode has N eigenvalues, thus there are N·2^N different timescales. The smallest and largest eigenvalues are plotted in Figure 4, and the eigenvalue spread is larger for higher N. Because the largest eigenvalue is positive, while the system is Lyapunov stable, we may expect a non-trivial mixing of the modes of functionality on the way to equilibrium. This was confirmed empirically for N = 8 and 1000 random arrays of weights with w_i ∈ (0, 1). No two weight arrays yielded the same mode of equilibration for all 2^N = 256 inputs, and thus 1000/1000 functions expressed were unique (for N = 3 this was not true and only 266/1000 unique functions were expressed). Wolfram Mathematica code to reproduce this result and investigate it for other values of N is provided in the Appendix.
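The Mathematica snippet itself is given in the Appendix and not reproduced here; as a rough Python re-implementation, under our own assumptions that the state is integrated from s = 0 by forward Euler and that a mode of equilibration is read out as the sign pattern of s_i − w_i at the fixed point (so the counts need not match the reported figures exactly), one could write:

```python
import itertools
import numpy as np

def equilibrium_mode(x, w, dt=0.05, steps=5000, tol=1e-9):
    """Integrate eq. (1) with F_i from eq. (2); return sgn(s_i - w_i) at rest."""
    s = np.zeros_like(w)
    for _ in range(steps):
        left = np.concatenate(([s[0]], s[:-1]))    # boundary: s_0 == s_1
        right = np.concatenate((s[1:], [s[-1]]))   # boundary: s_{N+1} == s_N
        ds = x + left + right - 2.0 * s - np.maximum(s - w, 0.0)
        s += dt * ds
        if np.max(np.abs(ds)) < tol:
            break
    return tuple(s > w)

def expressed_function(w):
    """The map from every N-bit input to its mode of equilibration."""
    return tuple(equilibrium_mode(np.array(bits), w)
                 for bits in itertools.product([0.0, 1.0], repeat=len(w)))

rng = np.random.default_rng(seed=0)
n, trials = 3, 1000
unique = {expressed_function(rng.uniform(0.0, 1.0, size=n)) for _ in range(trials)}
print(len(unique), "unique functions out of", trials)  # N = 8 is analogous, slower
```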
3 Conclusion
Our contribution here is threefold: (1) a typology of cognitive material systems that puts a spotlight on yet-to-be-appreciated alternatives to deep learning, (2) a mathematically tractable framework to investigate recurrent networks for deep feedforward functionality, (3) framing open problems in equilibrium-point logic. More theory is needed to develop constructive high-dimensional examples and to carry out a statistical analysis of SLIM's performance. Next steps could be to obtain estimates on the learning duration, and check if it is superior to estimates obtained from the principal convergence theorem for perceptrons (Rosenblatt, 1961, Section 5.5, Theorem 4). Given the well-established (in silico) deep learning industry, even with a more practical demonstration, business economics would prevent the shift to a SLIM paradigm in the near future. Nevertheless, the SLIM concept may act as a catalyst for gifted mathematicians to make new connections or help neuroscientists in unravelling the mysteries of small-scale brains.
Reviewer contributions
Reviewer sByK asks why this work should be considered novel, in comparison to earlier concepts such as predictive coding networks. The novelty here comes from using the function F_i in equation 1 to efficiently realize nonlinear predictors in materio, an improvement over the linear weighted-sum predictors (Srinivasan et al., 1982) that were inspired by image-compression techniques designed for conventional computers.
Reviewer joB1 is kind to provide a thoughtful summary, and suggests comparing this work to two other alternatives for realizing the parity function: the complex-weighted neuron of Aizenberg (2008) and the translated multiplicative neuron of Iyoda et al. (2003). In both alternatives, the implementation would be less robust to noise at large N, because only a single neuron is employed (a robust implementation would require a circuit of many physical units for the neuron, making it "single" only in a mathematical sense). SLIM need not suffer from such crowding problems, because the state-space can grow exponentially in volume with N. Several provably convergent schemes of Hebbian learning, as given by Pineda (1987), may be engineered in materio, to act as a generalization of backpropagation for closed-loop higher-order perceptrons.
Based on feedback from all reviewers, the technical novelty of this work has been clarified in the conclusion (contribution no. 2).
A Appendix
Snippet of Wolfram Mathematica code used to estimate the abundance of in-materio functionality. | 1. What is the focus of the paper, and how does it differ from other works in the field?
2. What are the strengths and weaknesses of the proposed network architecture?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or suggestions regarding the lack of analytical and numerical evaluations?
5. Does the reviewer have any recommendations for improving the paper or its contributions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper suggests an alternative network architecture, but lacks analytical and numerical evaluation.
Strengths And Weaknesses
Strength:
Out of the box
Weakness:
Lack of box
Clarity, Quality, Novelty And Reproducibility
Novel, but might profit from "gifted mathematicians to make new connections or help neuroscientists in unravelling the mysteries of small-scale brains."
ICLR | Title
Shallow Learning In Materio.
Abstract
We introduce Shallow Learning In Materio (SLIM) as a resource-efficient method to realize closed-loop higher-order perceptrons. Our SLIM method provides a rebuttal to the Minsky school’s disputes with the Rosenblatt school about the efficacy of learning representations in shallow perceptrons. As a proof-of-concept, here we devise a physically-scalable realization of the parity function. Our findings are relevant to artificial intelligence engineers, as well as neuroscientists and biologists.
1 Introduction
How do we best learn representations? We do not yet fully understand how cognition is manifested in any brain, not even in those of a worm (Rankin, 2004). It is an open question if the shallow brain of a worm is capable of working memory, but if it were then it certainly must depart from the mechanistic models of large-scale brains (Eliasmith et al., 2012). Nevertheless, worm-brain inspired learning combined with "scalable" deep learning architectures has been employed in self-driving cars (Lechner et al., 2020). At present, by scalable we refer to TPU-based architectures (Jouppi et al., 2017) trained by gradient-descent (Rumelhart et al., 1986). However, one could envision a super-scalable future that is less synthetic and based on self-organized nanomaterial systems (Bose et al., 2015; Chen et al., 2020; Mirigliano et al., 2021) that natively realize higher-order (Lawrence, 2022a) and recurrent neural networks. In this short communication, we shall lay yet another brick towards such a future by providing theoretical arguments.
Our perspective on cognitive material systems is illuminated in Figure 1. Deep learning owes its success to our technological capacity to synthesize massively-parallel and programmable electronic circuits. It is yet to fully exploit Darwinian and Hebbian learning methods that pioneers of the cybernetics movement experimented with by training homeostats (Ashby, 1952) and perceptrons (Rosenblatt, 1961). The spirit of Darwinian (Stanley et al., 2019) and Hebbian (Scellier & Bengio, 2017) learning continues to be alive, though. Here, we add fuel to that fire by advocating for an in-materio approach.
Employing physical systems in their native form for solving computational tasks gained attention due to the efforts of the 'evolution in materio' community (Miller & Downing, 2002). The earliest result was by Pask (1960), who grew dendritic metallic threads in a ferrous sulphate solution to function as a sound-frequency discriminator (which he called an ear, quite romantically). Now, more recent efforts are under the banner of physical reservoir computing (Tanaka et al., 2019) for realizing sequential functionality. Here, we will commit to combinational functionality by equilibrium-point logic (Lawrence, 2022b) in material systems realizing closed-loop higher-order perceptrons.
2 Theory
Perceptrons were developed by Rosenblatt and his team, and were trained by a Hebbian learning rule (error-controlled reinforcement) with proven guarantees for convergence. Unfortunately, they started receiving a bad rap after Minsky & Papert (1988) published a proof that 2^N association neurons are required to learn the N-bit parity function. However, this analysis is only applicable if all neurons are threshold logic gates, what Rosenblatt called simple units. Physical neural networks, on the other hand, can natively realize complex units. Hence, we introduce a shallow learning in materio (SLIM) perceptron as depicted in Figure 2.
For a proof-of-concept, we commit to a minimally connected recurrent network with physical states si from i = 1 : N , yielding a state-space model of the form
$$\dot{s}_i = x_i + F_i(s_{i-1}, s_i, s_{i+1}), \qquad (1)$$
where Fi is a nonlinear function. We conjecture that all possible N -bit functions may be realized if arbitrary choices of F1:N are allowed. At present there is no engineering theory to design an optimal Fi (even when N = 2).
We first take an approach amenable to discrete mathematics, and demonstrate equilibrium-point logic in Figure 3 with F_{1:2} designed as piecewise-constant functions. A promising approach to obtain F_i for higher dimensions is to identify an analogy with cellular automata that are capable of equilibrium-point parity logic in arbitrary dimensions (Betel et al., 2013). Obtaining scaling laws for the volume of state-space in action during equilibrium-point logic may be another worthy problem to ponder upon.
Imposing conditions of physical realizability on F_i would affect the neuronal capacity (Baldi & Vershynin, 2018) of our SLIM perceptron. To obtain an insight into the abundance of unique functions expressible by SLIM, let us consider a unit-resistor learnable-threshold (w_i) diode network of the form

$$F_i(s_{i-1}, s_i, s_{i+1}) = s_{i-1} + s_{i+1} - 2s_i - \mathrm{Ramp}(s_i - w_i), \qquad (2)$$

with s_0 ≡ s_1 and s_{N+1} ≡ s_N. The above equation is the simplest expression that captures the nonlinear synergetic interactivity found in the Lyapunov-stable resistor tunnel-diode networks studied in (Lawrence, 2022b). For each i = 1 : N, depending on sgn(s_i − w_i), there is a positive or a negative mode of equation 2, and thus there are 2^N modes of convergence to equilibrium. Each mode has N eigenvalues, thus there are N · 2^N different timescales. The smallest and largest eigenvalues are plotted in Figure 4, and the eigenvalue spread is larger for higher N. Because the largest eigenvalue is positive, while the system is Lyapunov stable, we may expect a non-trivial mixing of the modes of functionality on the way to equilibrium. This was confirmed empirically for N = 8 and 1000 random arrays of weights with w_i ∈ (0, 1). No two weight arrays yielded the same mode of equilibration for all 2^N = 256 inputs, and thus 1000/1000 functions expressed were unique (for N = 3 this was not true and only 266/1000 unique functions were expressed). Wolfram Mathematica code to reproduce this result and investigate it for other values of N is provided in the Appendix.
3 Conclusion
Our contribution here is threefold: (1) a typology of cognitive material systems that puts a spotlight on yet-to-be-appreciated alternatives to deep learning, (2) a mathematically tractable framework to investigate recurrent networks for deep feedforward functionality, (3) framing open problems in equilibrium-point logic. More theory is needed to develop constructive high-dimensional examples, and a statistical analysis of SLIM’s performance. Next steps could be to obtain estimates on the learning duration, and check if it is superior to estimates obtained from the principal convergence theorem for perceptrons (Rosenblatt, 1961, Section 5.5, Theorem 4). Given the well established (in silico) deep learning industry, even with a more practical demonstration, business economics would prevent the shift to a SLIM paradigm in the near future. Nevertheless, the SLIM concept may act as a catalyst for gifted mathematicians to make new connections or help neuroscientists in unravelling the mysteries of small-scale brains.
Reviewer contributions
Reviewer sByK asks why this work should be considered novel, in comparison to earlier concepts such as predictive coding networks. The novelty here comes from using the function F_i in equation 1 to efficiently realize nonlinear predictors in materio, an improvement over the linear weighted-sum predictors (Srinivasan et al., 1982) that were inspired by image-compression techniques designed for conventional computers.
Reviewer joB1 is kind to provide a thoughtful summary, and suggests comparing this work to two other alternatives for realizing the parity function: the complex-weighted neuron of Aizenberg (2008) and the translated multiplicative neuron of Iyoda et al. (2003). In both alternatives, the implementation would be less robust to noise at large N, because only a single neuron is employed (a robust implementation would require a circuit of many physical units for the neuron, making it "single" only in a mathematical sense). SLIM need not suffer from such crowding problems, because the state-space can grow exponentially in volume with N. Several provably convergent schemes of Hebbian learning as given by Pineda (1987) may be engineered in materio, to act as a generalization of backpropagation for closed-loop higher-order perceptrons.
Based on feedback from all reviewers, the technical novelty of this work has been clarified in the conclusion (contribution no. 2).
A Appendix
Snippet of Wolfram Mathematica code used to estimate the abundance of in-materio functionality. | 1. What is the focus of the paper regarding neural networks?
2. What are the strengths and weaknesses of the proposed approach compared to other works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
SLIM is intended to be a perceptron that involves a minimally connected recurrent network. The internal state variables have a nearest-neighbor interaction on a chain. The authors conjecture such networks could realize arbitrary N-bit Boolean functions, in contrast to the limitation of Rosenblatt perceptrons, as shown by Minsky and Papert.
Strengths And Weaknesses
Strengths: The paper explores alternatives to the deep learning architecture and hopes to have self-organized nanomaterial-based learning machines.
Weaknesses: The key idea is not so novel. From a multitude of recurrent neural networks (including the reservoir computing framework, mentioned by the authors) to predictive coding networks, many approaches utilize the idea of a network with internal dynamics, the equilibrium point of which helps perform the task.
There is very little work done to substantiate the claim for the particular architecture proposed.
Clarity, Quality, Novelty And Reproducibility
The purpose of the paper is clear. Quality, novelty and reproducibility are hard to comment on since there is so little material.
ICLR | Title
Shallow Learning In Materio.
Abstract
We introduce Shallow Learning In Materio (SLIM) as a resource-efficient method to realize closed-loop higher-order perceptrons. Our SLIM method provides a rebuttal to the Minsky school’s disputes with the Rosenblatt school about the efficacy of learning representations in shallow perceptrons. As a proof-of-concept, here we devise a physically-scalable realization of the parity function. Our findings are relevant to artificial intelligence engineers, as well as neuroscientists and biologists.
1 Introduction
How do we best learn representations? We do not yet fully understand how cognition is manifested in any brain, not even in those of a worm (Rankin, 2004). It is an open question if the shallow brain of a worm is capable of working memory, but if it were then it certainly must depart from the mechanistic models of large-scale brains (Eliasmith et al., 2012). Nevertheless, worm-brain inspired learning combined with "scalable" deep learning architectures has been employed in self-driving cars (Lechner et al., 2020). At present, by scalable we refer to TPU-based architectures (Jouppi et al., 2017) trained by gradient-descent (Rumelhart et al., 1986). However, one could envision a super-scalable future that is less synthetic and based on self-organized nanomaterial systems (Bose et al., 2015; Chen et al., 2020; Mirigliano et al., 2021) that natively realize higher-order (Lawrence, 2022a) and recurrent neural networks. In this short communication, we shall lay yet another brick towards such a future by providing theoretical arguments.
Our perspective on cognitive material systems is illuminated in Figure 1. Deep learning owes its success to our technological capacity to synthesize massively-parallel and programmable electronic circuits. It is yet to fully exploit Darwinian and Hebbian learning methods that pioneers of the cybernetics movement experimented with by training homeostats (Ashby, 1952) and perceptrons (Rosenblatt, 1961). The spirit of Darwinian (Stanley et al., 2019) and Hebbian (Scellier & Bengio, 2017) learning continues to be alive, though. Here, we add fuel to that fire by advocating for an in-materio approach.
Employing physical systems in their native form for solving computational tasks gained attention due to the efforts of the 'evolution in materio' community (Miller & Downing, 2002). The earliest result was by Pask (1960), who grew dendritic metallic threads in a ferrous sulphate solution to function as a sound-frequency discriminator (which he called an ear, quite romantically). Now, more recent efforts are under the banner of physical reservoir computing (Tanaka et al., 2019) for realizing sequential functionality. Here, we will commit to combinational functionality by equilibrium-point logic (Lawrence, 2022b) in material systems realizing closed-loop higher-order perceptrons.
2 Theory
Perceptrons were developed by Rosenblatt and his team, and were trained by a Hebbian learning rule (error-controlled reinforcement) with proven guarantees for convergence. Unfortunately, they started receiving a bad rap after Minsky & Papert (1988) published a proof that 2^N association neurons are required to learn the N-bit parity function. However, this analysis is only applicable if all neurons are threshold logic gates, what Rosenblatt called simple units. Physical neural networks, on the other hand, can natively realize complex units. Hence, we introduce a shallow learning in materio (SLIM) perceptron as depicted in Figure 2.
For a proof-of-concept, we commit to a minimally connected recurrent network with physical states si from i = 1 : N , yielding a state-space model of the form
$$\dot{s}_i = x_i + F_i(s_{i-1}, s_i, s_{i+1}), \qquad (1)$$
where Fi is a nonlinear function. We conjecture that all possible N -bit functions may be realized if arbitrary choices of F1:N are allowed. At present there is no engineering theory to design an optimal Fi (even when N = 2).
We first take an approach amenable to discrete mathematics, and demonstrate equilibrium-point logic in Figure 3 with F_{1:2} designed as piecewise-constant functions. A promising approach to obtain F_i for higher dimensions is to identify an analogy with cellular automata that are capable of equilibrium-point parity logic in arbitrary dimensions (Betel et al., 2013). Obtaining scaling laws for the volume of state-space in action during equilibrium-point logic may be another worthy problem to ponder upon.
Imposing conditions of physical realizability on F_i would affect the neuronal capacity (Baldi & Vershynin, 2018) of our SLIM perceptron. To obtain an insight into the abundance of unique functions expressible by SLIM, let us consider a unit-resistor learnable-threshold (w_i) diode network of the form

$$F_i(s_{i-1}, s_i, s_{i+1}) = s_{i-1} + s_{i+1} - 2s_i - \mathrm{Ramp}(s_i - w_i), \qquad (2)$$

with s_0 ≡ s_1 and s_{N+1} ≡ s_N. The above equation is the simplest expression that captures the nonlinear synergetic interactivity found in the Lyapunov-stable resistor tunnel-diode networks studied in (Lawrence, 2022b). For each i = 1 : N, depending on sgn(s_i − w_i), there is a positive or a negative mode of equation 2, and thus there are 2^N modes of convergence to equilibrium. Each mode has N eigenvalues, thus there are N · 2^N different timescales. The smallest and largest eigenvalues are plotted in Figure 4, and the eigenvalue spread is larger for higher N. Because the largest eigenvalue is positive, while the system is Lyapunov stable, we may expect a non-trivial mixing of the modes of functionality on the way to equilibrium. This was confirmed empirically for N = 8 and 1000 random arrays of weights with w_i ∈ (0, 1). No two weight arrays yielded the same mode of equilibration for all 2^N = 256 inputs, and thus 1000/1000 functions expressed were unique (for N = 3 this was not true and only 266/1000 unique functions were expressed). Wolfram Mathematica code to reproduce this result and investigate it for other values of N is provided in the Appendix.
3 Conclusion
Our contribution here is threefold: (1) a typology of cognitive material systems that puts a spotlight on yet-to-be-appreciated alternatives to deep learning, (2) a mathematically tractable framework to investigate recurrent networks for deep feedforward functionality, (3) framing open problems in equilibrium-point logic. More theory is needed to develop constructive high-dimensional examples, and a statistical analysis of SLIM’s performance. Next steps could be to obtain estimates on the learning duration, and check if it is superior to estimates obtained from the principal convergence theorem for perceptrons (Rosenblatt, 1961, Section 5.5, Theorem 4). Given the well established (in silico) deep learning industry, even with a more practical demonstration, business economics would prevent the shift to a SLIM paradigm in the near future. Nevertheless, the SLIM concept may act as a catalyst for gifted mathematicians to make new connections or help neuroscientists in unravelling the mysteries of small-scale brains.
Reviewer contributions
Reviewer sByK asks why this work should be considered novel, in comparison to earlier concepts such as predictive coding networks. The novelty here comes from using the function F_i in equation 1 to efficiently realize nonlinear predictors in materio, an improvement over the linear weighted-sum predictors (Srinivasan et al., 1982) that were inspired by image-compression techniques designed for conventional computers.
Reviewer joB1 is kind to provide a thoughtful summary, and suggests comparing this work to two other alternatives for realizing the parity function: the complex-weighted neuron of Aizenberg (2008) and the translated multiplicative neuron of Iyoda et al. (2003). In both alternatives, the implementation would be less robust to noise at large N, because only a single neuron is employed (a robust implementation would require a circuit of many physical units for the neuron, making it "single" only in a mathematical sense). SLIM need not suffer from such crowding problems, because the state-space can grow exponentially in volume with N. Several provably convergent schemes of Hebbian learning as given by Pineda (1987) may be engineered in materio, to act as a generalization of backpropagation for closed-loop higher-order perceptrons.
Based on feedback from all reviewers, the technical novelty of this work has been clarified in the conclusion (contribution no. 2).
A Appendix
Snippet of Wolfram Mathematica code used to estimate the abundance of in-materio functionality. | 1. What is the focus of the paper, and what contribution does it make to the field?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and relevance to the field?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the proposed architecture and its relation to previous works? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors introduce a structural variant of single-layer perceptrons with added recurrent connections. The model seems to be mostly a standard RNN, with the difference of a more general activation function. No evaluation is provided, aside from a hand-designed example.
Strengths And Weaknesses
Strengths:
The paper is short, which makes it easy to read.
Weaknesses:
The main problem tackled in the paper has not been a particular concern since the early 80s and is largely solved.
The structure of the paper is unorthodox and lacks most of the required content (concerning the theory and the model) and any practical evaluation.
Figure 3 is difficult to understand, as no explanation of the notation is given either in the text or in the caption.
The proposed architecture seems to be a standard RNN with a more flexible activation function, which however the authors leave undefined.
Related literature is severely lacking, both to justify the significance of the problem addressed and all the work done on it since the 80s.
Clarity, Quality, Novelty And Reproducibility
The paper is written poorly and in a non-standard way, and cannot be accepted in its present form. The work presented does not seem novel, or it is only marginally novel. Almost no implementation details are given, and the only example is designed by hand. |
ICLR | Title
Shallow Learning In Materio.
Abstract
We introduce Shallow Learning In Materio (SLIM) as a resource-efficient method to realize closed-loop higher-order perceptrons. Our SLIM method provides a rebuttal to the Minsky school’s disputes with the Rosenblatt school about the efficacy of learning representations in shallow perceptrons. As a proof-of-concept, here we devise a physically-scalable realization of the parity function. Our findings are relevant to artificial intelligence engineers, as well as neuroscientists and biologists.
1 Introduction
How do we best learn representations? We do not yet fully understand how cognition is manifested in any brain, not even in those of a worm (Rankin, 2004). It is an open question if the shallow brain of a worm is capable of working memory, but if it were then it certainly must depart from the mechanistic models of large-scale brains (Eliasmith et al., 2012). Nevertheless, worm-brain inspired learning combined with "scalable" deep learning architectures has been employed in self-driving cars (Lechner et al., 2020). At present, by scalable we refer to TPU-based architectures (Jouppi et al., 2017) trained by gradient-descent (Rumelhart et al., 1986). However, one could envision a super-scalable future that is less synthetic and based on self-organized nanomaterial systems (Bose et al., 2015; Chen et al., 2020; Mirigliano et al., 2021) that natively realize higher-order (Lawrence, 2022a) and recurrent neural networks. In this short communication, we shall lay yet another brick towards such a future by providing theoretical arguments.
Our perspective on cognitive material systems is illuminated in Figure 1. Deep learning owes its success to our technological capacity to synthesize massively-parallel and programmable electronic circuits. It is yet to fully exploit Darwinian and Hebbian learning methods that pioneers of the cybernetics movement experimented with by training homeostats (Ashby, 1952) and perceptrons (Rosenblatt, 1961). The spirit of Darwinian (Stanley et al., 2019) and Hebbian (Scellier & Bengio, 2017) learning continues to be alive, though. Here, we add fuel to that fire by advocating for an in-materio approach.
Employing physical systems in their native form for solving computational tasks gained attention due to the efforts of the 'evolution in materio' community (Miller & Downing, 2002). The earliest result was by Pask (1960), who grew dendritic metallic threads in a ferrous sulphate solution to function as a sound-frequency discriminator (which he called an ear, quite romantically). Now, more recent efforts are under the banner of physical reservoir computing (Tanaka et al., 2019) for realizing sequential functionality. Here, we will commit to combinational functionality by equilibrium-point logic (Lawrence, 2022b) in material systems realizing closed-loop higher-order perceptrons.
2 Theory
Perceptrons were developed by Rosenblatt and his team, and were trained by a Hebbian learning rule (error-controlled reinforcement) with proven guarantees for convergence. Unfortunately, they started receiving a bad rap after Minsky & Papert (1988) published a proof that 2^N association neurons are required to learn the N-bit parity function. However, this analysis is only applicable if all neurons are threshold logic gates, what Rosenblatt called simple units. Physical neural networks, on the other hand, can natively realize complex units. Hence, we introduce a shallow learning in materio (SLIM) perceptron as depicted in Figure 2.
For a proof-of-concept, we commit to a minimally connected recurrent network with physical states si from i = 1 : N , yielding a state-space model of the form
$$\dot{s}_i = x_i + F_i(s_{i-1}, s_i, s_{i+1}), \qquad (1)$$
where Fi is a nonlinear function. We conjecture that all possible N -bit functions may be realized if arbitrary choices of F1:N are allowed. At present there is no engineering theory to design an optimal Fi (even when N = 2).
We first take an approach amenable to discrete mathematics, and demonstrate equilibrium-point logic in Figure 3 with F_{1:2} designed as piecewise-constant functions. A promising approach to obtain F_i for higher dimensions is to identify an analogy with cellular automata that are capable of equilibrium-point parity logic in arbitrary dimensions (Betel et al., 2013). Obtaining scaling laws for the volume of state-space in action during equilibrium-point logic may be another worthy problem to ponder upon.
Imposing conditions of physical realizability on F_i would affect the neuronal capacity (Baldi & Vershynin, 2018) of our SLIM perceptron. To obtain an insight into the abundance of unique functions expressible by SLIM, let us consider a unit-resistor learnable-threshold (w_i) diode network of the form

$$F_i(s_{i-1}, s_i, s_{i+1}) = s_{i-1} + s_{i+1} - 2s_i - \mathrm{Ramp}(s_i - w_i), \qquad (2)$$

with s_0 ≡ s_1 and s_{N+1} ≡ s_N. The above equation is the simplest expression that captures the nonlinear synergetic interactivity found in the Lyapunov-stable resistor tunnel-diode networks studied in (Lawrence, 2022b). For each i = 1 : N, depending on sgn(s_i − w_i), there is a positive or a negative mode of equation 2, and thus there are 2^N modes of convergence to equilibrium. Each mode has N eigenvalues, thus there are N · 2^N different timescales. The smallest and largest eigenvalues are plotted in Figure 4, and the eigenvalue spread is larger for higher N. Because the largest eigenvalue is positive, while the system is Lyapunov stable, we may expect a non-trivial mixing of the modes of functionality on the way to equilibrium. This was confirmed empirically for N = 8 and 1000 random arrays of weights with w_i ∈ (0, 1). No two weight arrays yielded the same mode of equilibration for all 2^N = 256 inputs, and thus 1000/1000 functions expressed were unique (for N = 3 this was not true and only 266/1000 unique functions were expressed). Wolfram Mathematica code to reproduce this result and investigate it for other values of N is provided in the Appendix.
3 Conclusion
Our contribution here is threefold: (1) a typology of cognitive material systems that puts a spotlight on yet-to-be-appreciated alternatives to deep learning, (2) a mathematically tractable framework to investigate recurrent networks for deep feedforward functionality, (3) framing open problems in equilibrium-point logic. More theory is needed to develop constructive high-dimensional examples, and a statistical analysis of SLIM’s performance. Next steps could be to obtain estimates on the learning duration, and check if it is superior to estimates obtained from the principal convergence theorem for perceptrons (Rosenblatt, 1961, Section 5.5, Theorem 4). Given the well established (in silico) deep learning industry, even with a more practical demonstration, business economics would prevent the shift to a SLIM paradigm in the near future. Nevertheless, the SLIM concept may act as a catalyst for gifted mathematicians to make new connections or help neuroscientists in unravelling the mysteries of small-scale brains.
Reviewer contributions
Reviewer sByK asks why this work should be considered novel, in comparison to earlier concepts such as predictive coding networks. The novelty here comes from using the function F_i in equation 1 to efficiently realize nonlinear predictors in materio, an improvement over the linear weighted-sum predictors (Srinivasan et al., 1982) that were inspired by image-compression techniques designed for conventional computers.
Reviewer joB1 is kind to provide a thoughtful summary, and suggests comparing this work to two other alternatives for realizing the parity function: the complex-weighted neuron of Aizenberg (2008) and the translated multiplicative neuron of Iyoda et al. (2003). In both alternatives, the implementation would be less robust to noise at large N, because only a single neuron is employed (a robust implementation would require a circuit of many physical units for the neuron, making it "single" only in a mathematical sense). SLIM need not suffer from such crowding problems, because the state-space can grow exponentially in volume with N. Several provably convergent schemes of Hebbian learning as given by Pineda (1987) may be engineered in materio, to act as a generalization of backpropagation for closed-loop higher-order perceptrons.
Based on feedback from all reviewers, the technical novelty of this work has been clarified in the conclusion (contribution no. 2).
A Appendix
Snippet of Wolfram Mathematica code used to estimate the abundance of in-materio functionality. | 1. What is the main contribution of the paper regarding shallow recurrent neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in its illustrative example and open questions for future research?
3. Do you have any concerns or suggestions regarding the model's description, training process, choice of F-functions, and reproducibility?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor errors or typos in the paper that should be addressed? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Goals and contribution:
The authors try to draw the attention of the community to shallow recurrent neural networks trained by Hebbian learning in the context of cognitive material systems.
They argue that these models could compete with deep-learning systems in the future - thanks to upcoming advancement of in-materio technologies.
They demonstrate this idea on an illustrative example of the N-bit (2-bit) parity function realized by the newly-proposed SLIM (Shallow Learning In Materio) model.
The authors also point out several crucial open problems of the proposed approach as topics for future research.
Strengths And Weaknesses
Strengths:
I agree with the authors that the shallow recurrent higher-order neural networks are a promising area of future research - due to the progress of in-materio technologies.
The illustrative example is well-chosen. It demonstrates well how the proposed SLIM model could overcome the classical problem of N-bit parity (Minsky & Papert, 1988) - thanks to the higher-order units used instead of threshold logic gates.
The authors provide several open questions concerning their model as possible subjects of future research.
Limitations:
The paper is very brief and vague in every aspect, and its technical quality is limited.
There is no theoretical justification of the method (and its convergence) and the empirical evaluation is limited to a toy problem.
Discussion of the limitations:
The paper is very brief and vague in every aspect, and its technical quality is limited.
Mainly, the proposed model should be described in more detail, including training algorithm and analysis/discussion of the choice of the F_i functions.
The illustrative example also needs a more detailed description. How did you choose F_1,2 and why? Was the model trained by Hebbian learning? Can you show the concrete final model (its F-functions and weights)? Figure 3 is not comprehensible - could you describe more clearly what the four images represent?
There is no theoretical justification of the method (and its convergence) and the empirical evaluation is limited to a toy problem. The paper addresses more limitations and problems of the proposed model than its advances. Many important questions are not addressed:
Is it possible to extend the 2-bit parity model to the n-bit one? Are you able to show a solution for n=3?
The training process of the model is not described/analyzed.
The choice of F-functions for different tasks is not described/analyzed.
From the paper it seems like it is difficult to successfully apply the model to different tasks. Is it right?
Clarity, Quality, Novelty And Reproducibility
Originality:
The main idea of the proposed approach - to apply the recurrent shallow neural networks in the context of in-materio technologies (enabling to use higher-order units) - is novel and interesting. However, the realization is not very convincing (due to the lack of clear description, deeper theoretical or experimental evaluation or comparison to alternative approaches).
Related works seem to be cited adequately, except:
Based on Section 1, the paper closely follows (Lawrence, 2022b). However, (Lawrence, 2022b) seems to be unavailable (the link doesn't work and I was not able to find it elsewhere).
Because the authors concentrate on the n-bit parity function realized by shallow networks, they should compare their approach to some alternative/similar models and cite them (e.g., [1], [2]).
Clarity and reproducibility:
Both the proposed model and the experiment should be described in more detail. There are many open questions that hinder reproducibility (e.g., missing description of the training process, missing detailed description of the used F-functions,...).
Figure 3 is not well-described and thus hardly comprehensible.
Quality:
The technical quality of the paper is poor. The paper looks like an incomplete work in progress (see "Strength And Weaknesses" Section for details).
Minor:
(Rumelhart et al., 1986) is cited as a reference to TPU-based architectures, isn't that an error?
[1] Aizenberg, I. Solving the XOR and parity N problems using a single universal binary neuron. Soft Comput 12, 215–222 (2008). https://doi.org/10.1007/s00500-007-0204-9
[2] Iyoda, E.M., Nobuhara, H. & Hirota, K. A Solution for the N-bit Parity Problem Using a Single Translated Multiplicative Neuron. Neural Processing Letters 18, 233–238 (2003). https://doi.org/10.1023/B:NEPL.0000011147.74207.8c |
ICLR | Title
Noisy Machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation
Abstract
The success of deep learning has brought forth a wave of interest in computer hardware design to better meet the high demands of neural network inference. In particular, analog computing hardware has been heavily motivated specifically for accelerating neural networks, based on either electronic, optical or photonic devices, which may well achieve lower power consumption than conventional digital electronics. However, these proposed analog accelerators suffer from the intrinsic noise generated by their physical components, which makes it challenging to achieve high accuracy on deep neural networks. Hence, for successful deployment on analog accelerators, it is essential to be able to train deep neural networks to be robust to random continuous noise in the network weights, which is a somewhat new challenge in machine learning. In this paper, we advance the understanding of noisy neural networks. We outline how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output. To combat this, we propose using knowledge distillation combined with noise injection during training to achieve more noise robust networks, which is demonstrated experimentally across different networks and datasets, including ImageNet. Our method achieves models with as much as ∼ 2× greater noise tolerance compared with the previous best attempts, which is a significant step towards making analog hardware practical for deep learning.
1 INTRODUCTION
Deep neural networks (DNNs) have achieved unprecedented performance over a wide variety of tasks such as computer vision, speech recognition, and natural language processing. However, DNN inference is typically very demanding in terms of compute and memory resources. Consequently, larger models are often not well suited for large-scale deployment on edge devices, which typically have meagre performance and power budgets, especially battery powered mobile and IoT devices. To address these issues, the design of specialized hardware for DNN inference has drawn great interest, and is an extremely active area of research. To date, a plethora of techniques have been proposed for designing efficient neural network hardware (Sze et al., 2017).
In contrast to the current status quo of predominantly digital hardware, there is significant research interest in analog hardware for DNN inference. In this approach, digital values are represented by analog quantities such as electrical voltages or light pulses, and the computation itself (e.g., multiplication and addition) proceeds in the analog domain, before eventually being converted back to digital. Analog accelerators take advantage of particular efficiencies of analog computation in exchange for losing the bit-exact precision of digital. In other words, analog compute is cheap but somewhat imprecise. Analog computation has been demonstrated in the context of DNN inference in both electronic (Binas et al., 2016), photonic (Shen et al., 2017) and optical (Lin et al., 2018) systems. Analog accelerators promise to deliver at least two orders of magnitude better performance over a conventional digital processor for deep learning workloads in both speed (Shen et al., 2017) and energy efficiency (Ni et al., 2017). Electronic analog DNN accelerators are arguably the most mature technology and hence will be our focus in this work.
The most common approach to electronic analog DNN acceleration is in-memory computing, which typically uses non-volatile memory (NVM) crossbar arrays to encode the network weights as analog values. The NVM itself can be implemented with memristive devices, such as metal-oxide resistive random-access memory (ReRAM) (Hu et al., 2018) or phase-change memory (PCM) (Le Gallo et al., 2018; Boybat et al., 2018; Ambrogio et al., 2018). The matrix-vector operations computed during inference are then performed in parallel inside the crossbar array, operating on analog quantities for weights and activations. For example, addition of two quantities encoded as electrical currents can be achieved by simply connecting the two wires together, whereby the currents will add linearly according to Kirchhoff's current law. In this case, there is almost zero latency or energy dissipation for this operation.
Similarly, multiplication with a weight can be achieved by programming the NVM cell conductance to the weight value, which is then used to convert an input activation encoded as a voltage into a scaled current, following Ohm's law. Therefore, the analog approach promises significantly improved throughput and energy efficiency. However, the analog nature of the weights makes the compute noisy, which can limit inference accuracy. For example, a simple two-layer fully-connected network with a baseline accuracy of 91.7% on digital hardware achieves only 76.7% when implemented on an analog photonic array (Shen et al., 2017). This kind of accuracy degradation is not acceptable for most deep learning applications. Therefore, the challenge of imprecise analog hardware motivates us to study and understand noisy neural networks, in order to maintain inference accuracy under noisy analog computation.
The question of how to effectively learn and compute with a noisy machine is a long-standing problem of interest in machine learning and computer science (Stevenson et al., 1990; Von Neumann, 1956). In this paper, we study noisy neural networks to understand their inference performance. We also demonstrate how to train a neural network with distillation and noise injection to make it more resilient to computation noise, enabling higher inference accuracy for models deployed on analog hardware. We present empirical results that demonstrate state-of-the-art noise tolerance on multiple datasets, including ImageNet.
The remainder of the paper is organized as follows. Section 2 gives an overview of related work. Section 3 outlines the problem statement. Section 4 presents a more formal analysis of noisy neural networks. Section 5 gives a distillation methodology for training noisy neural networks, with experimental results. Finally, Section 6 provides a brief discussion and Section 7 closes with concluding remarks.
2 RELATED WORK
Previous work broadly falls under the following categories: studying the effect of analog computation noise, analysis of noise-injection for DNNs, and use of distillation in model training.
Analog Computation Noise Models In Rekhi et al. (2019), the noise due to analog computation is modeled as additive parameter noise with zero-mean Gaussian distribution. The variance of this Gaussian is a function of the effective number of bits of the output of an analog computation. Similarly, the authors in Joshi et al. (2019) also model analog computation noise as additive Gaussian noise on the parameters, where the variance is proportional to the range of values that their PCM device can represent. Some noise models presented have included a more detailed account of device-level interactions, such as voltage drop across the analog array (Jain et al., 2018; Feinberg et al., 2018), but are beyond the scope of this paper. In this work, we consider an additive Gaussian noise model on the weights, similar to Rekhi et al. (2019); Joshi et al. (2019) and present a novel training method that outperforms the previous work in model noise resilience.
Noise Injection for Neural Networks Several stochastic regularization techniques based on noise-injection and dropout (Srivastava et al., 2014; Noh et al., 2017; Li & Liu, 2016) have been demonstrated to be highly effective at reducing overfitting. For generalized linear models, dropout and additive noise have been shown to be equivalent to adaptive L2 regularization to a first order (Wager et al., 2013). Training networks with Gaussian noise added to the weights or activations can also increase robustness to variety of adversarial attacks (Rakin et al., 2018). Bayesian neural networks replace deterministic weights with distributions in order to optimize over the posterior
distribution of the weights (Kingma & Welling, 2013). Many of these methods use noise injection at inference time to approximate weight distribution; in Gal & Ghahramani (2016) a link between Gaussian processes and dropout is established in an effort to model the uncertainty of the output of a network. A theoretical analysis by Stevenson et al. (1990) has shown that for neural networks with adaptive linear neurons, the probability of error of a noisy neural network classifier with weight noise increases with the number of layers, but largely independent of the number of weights per neuron or neurons per layer.
Distillation in Training Knowledge distillation (Hinton et al., 2015) is a well known technique in which the soft labels produced by a teacher model are used to train a student model which typically has reduced capacity. Distillation has shown merit for improving model performance across a range of scenarios, including student models lacking access to portions of training data (Micaelli & Storkey, 2019), quantized low-precision networks (Polino et al., 2018; Mishra & Marr, 2017), protection against adversarial attacks (Papernot et al., 2016; Goldblum et al., 2019), and in avoiding catastrophic forgetting for multi-task learning (Schwarz et al., 2018). To the best of our knowledge, our work is the first to combine distillation with noise injection in training to enhance model noise robustness.
3 PROBLEM STATEMENT
Without loss of generality, we model a general noisy machine after a simple memristive crossbar array, similar to Shafiee et al. (2016). Figure 1 illustrates how an arbitrary neural network layer, l, such as a typical 3×3 convolution, can be mapped to this hardware substrate by first flattening the weights into a single large 2D matrix, W_l, and then programming each element of this matrix into a memristive cell in the crossbar array, which provides the required conductances G_l (the reciprocal of resistance) to perform analog multiplication following Ohm's law, i_out = v_in · G. Note that a differential pair of NVM devices is typically used to represent a signed quantity in G_l. Subsequently, input activations, x_l, converted into continuous voltages, v(x_l), are streamed into the array rows from the left-hand side. The memristive devices connect rows with columns, where the row voltages are converted into currents scaled by the programmed conductance, G, to generate the currents i(y_l), which are differential in order to represent both positive and negative quantities with unipolar signals. The currents from each memristive device essentially add up for free where they are connected in the columns, according to Kirchhoff's current law. Finally, the differential currents are converted to bipolar voltages, v(y_l), which are then digitized before adding bias, and performing batch normalization and ReLU operations, which are not shown in Figure 1.
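To make the mapping concrete, below is a small NumPy sketch of an idealized, noise-free crossbar matrix-vector product; it is our own illustration rather than anything specified in the paper, and the conductance scale g_max and read voltage v_read are placeholder values.

```python
import numpy as np

def program_conductances(W, g_max):
    # Signed weights -> a differential pair of non-negative conductance arrays.
    scale = g_max / np.max(np.abs(W))
    return np.maximum(W, 0.0) * scale, np.maximum(-W, 0.0) * scale, scale

def crossbar_matvec(G_pos, G_neg, scale, x, v_read=0.2):
    v_in = v_read * x                 # activations encoded as row voltages
    i_pos = G_pos @ v_in              # Ohm's law per cell; Kirchhoff sums per column
    i_neg = G_neg @ v_in
    return (i_pos - i_neg) / (scale * v_read)  # differential readout, rescaled

W = np.random.randn(4, 6)             # flattened layer weights W_l
x = np.random.randn(6)
G_pos, G_neg, scale = program_conductances(W, g_max=1e-4)  # siemens, illustrative
print(np.allclose(crossbar_matvec(G_pos, G_neg, scale, x), W @ x))  # True when ideal
```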
However, the analog inference hardware of Figure 1 is subject to real-world non-idealities, typically attributed to variations in: 1) manufacturing process, 2) supply voltage and 3) temperature, collectively known as PVT variation, all of which result in noise in the system. Below we discuss the two key components in terms of analog noise modeling.
Data Converters. Digital-to-analog converter (DAC) and analog-to-digital converter (ADC) circuits are designed to be robust to PVT variation, but in practice these effects do degrade the resolution (i.e. number of bits). Therefore, we consider effective number of bits (ENOB), which is a lower bound on resolution in the presence of non-idealities. Hence, we use activation and weight quantization with ENOB data converters and no additional converter noise modeling.
NVM cells. Due to their analog nature, memristive NVM cells have limited precision, due to the read and write circuitry (Joshi et al., 2019). In between write and read operations, their stored value is prone to drift over time. Long-term drift can be corrected with periodic refresh operations. At shorter timescales, time-varying noise may be encountered. For most of the experiments in this paper, we model generic NVM cell noise as an additive zero-mean i.i.d. Gaussian error term on the weights of the model in each particular layer, ΔW_l ∼ N(ΔW_l; 0, σ²_{N,l} I). This simple model, described more concretely in Section 5, is similar to that used by Joshi et al. (2019), which was verified on real hardware. In addition, we also investigate spatially-varying and time-varying noise models in Section 5.2 (Table 1).
4 ANALYSIS OF NOISY NEURAL NETWORKS
4.1 BIAS VARIANCE DECOMPOSITION FOR NOISY WEIGHTS
Naively deploying an off-the-shelf pretrained model on a noisy accelerator will yield poor accuracy for a fundamental reason. Consider a neural network f(W;x) with weights W that maps an input x ∈ R^n to an output y ∈ R. In the framework of statistical learning, x and y are considered to be randomly distributed following a joint probability distribution p(x, y). In a noisy neural network, the weights W are also randomly distributed, with distribution p(W). The expected Mean Squared Error (MSE) of this noisy neural network can be decomposed as
$$\begin{aligned}
\mathbb{E}_{(x,y)\sim p(x,y),\,W\sim p(W)}\big[(f(W;x)-y)^2\big]
&= \mathbb{E}_{(x,y)\sim p(x,y),\,W\sim p(W)}\big[(f(W;x)-\mathbb{E}_{W\sim p(W)}[f(W;x)] + \mathbb{E}_{W\sim p(W)}[f(W;x)]-y)^2\big] \\
&= \mathbb{E}_{x\sim p(x)}\Big[\mathbb{E}_{W\sim p(W)}\big[(f(W;x)-\mathbb{E}_{W\sim p(W)}[f(W;x)])^2\big]\Big] \\
&\quad + \mathbb{E}_{(x,y)\sim p(x,y)}\big[(\mathbb{E}_{W\sim p(W)}[f(W;x)]-y)^2\big]. \qquad (1)
\end{aligned}$$
The first term on the right hand side of Equation 1 is a variance loss term due to randomness in the weights and is denoted as l_var. The second term is a squared bias loss term which we call l_bias. However, a model is typically trained to minimize the empirical version of the expected loss l_pretrained = E_{(x,y)∼p(x,y)}[(f(E[W];x) − y)^2]. We assume that the noise is centered such that pretrained weights are equal to E[W]. A pretrained model is therefore optimized for the wrong loss function when deployed on a noisy accelerator. To show this in a more concrete way, a baseline LeNet model (32 filters in the first convolutional layer, 64 filters in the second convolutional layer and 1024 neurons in the fully-connected layer) (LeCun et al., 1998) is trained on the MNIST dataset to 99.19% accuracy and is then exposed to Gaussian noise in its weights, so that numerical values of these loss terms can be estimated. The expected value of the network output E_W[f(W;x)] is estimated by averaging over outputs of different instances of the network for the same input x. We perform inference on n = 100 different instances of the network and estimate the loss terms as
$$\bar{f}(W;x) = \mathbb{E}_{W\sim p(W)}[f(W;x)] \approx \frac{1}{n}\sum_{i=1}^{n} f(W_i;x), \qquad (2)$$

$$\hat{l}_{var} = \frac{1}{N}\sum_{j=1}^{N}\frac{1}{n}\sum_{i=1}^{n}\big(f(W_i;x_j)-\bar{f}(W;x_j)\big)^2, \qquad (3)$$

$$\hat{l}_{bias} = \frac{1}{N}\sum_{j=1}^{N}\big(\bar{f}(W;x_j)-y_j\big)^2, \qquad (4)$$

$$\hat{l}_{pretrained} = \frac{1}{N}\sum_{j=1}^{N}\big(f(\mathbb{E}[W];x_j)-y_j\big)^2. \qquad (5)$$
The above formulas are for a network with a scalar output. They can be easily extended to the vector output case by averaging over all outputs. In the LeNet example, we take the output of the softmax layer to calculate squared losses. The noise is assumed i.i.d. Gaussian centered around zero with a fixed SNR σ²_{W,l}/σ²_{N,l} in each layer l. The numerical values of the above losses are estimated using
the entire test dataset for different noise levels. Results are shown in Figure 2(a). l̂_bias is initially equal to l̂_pretrained and l̂_var = 0 when there is no noise. However, as the noise level rises, they increase in magnitude and become much more important than l̂_pretrained. l̂_var overtakes l̂_bias to become the predominant loss term in a noisy LeNet at σ_N/σ_W ≈ 0.6. It is useful to note that l_bias increases with noise entirely due to nonlinearity in the network, which is ReLU in the case of LeNet. In a linear model, l_bias should be equal to l_pretrained, as we would have f(E[W];x) = E[f(W;x)]. A model trained in a conventional manner is thus not optimized for the real loss it is going to encounter on a noisy accelerator. Special retraining is required to improve its noise tolerance. In Figure 2(a), we show how the model accuracy degrades with a rising noise level for the baseline LeNet and its deeper and wider variants. The deeper network is obtained by stacking two more convolutional layers of width 16 in front of the baseline network and the wider network is obtained by increasing the widths of each layer in the baseline to 128, 256 and 2048 respectively. Performance degradation due to noise is worse for the deeper variant and less severe for the wider one. A more detailed discussion of the effect of network architecture on performance under noise is offered in Section 4.2.
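As a concrete illustration of the estimators in equations (2)-(5), here is a hedged PyTorch sketch of the Monte Carlo procedure; `model` and `test_loader` are assumed to be defined elsewhere, and the per-layer noise scale follows the fixed-SNR convention above.

```python
import copy
import torch

@torch.no_grad()
def noisy_loss_terms(model, test_loader, noise_ratio=0.5, n=100):
    # noise_ratio is sigma_N / sigma_W, applied per layer (fixed SNR).
    l_var, l_bias, count = 0.0, 0.0, 0
    for x, y in test_loader:
        outs = []
        for _ in range(n):                        # n noisy instances of the network
            noisy = copy.deepcopy(model)
            for p in noisy.parameters():
                if p.numel() > 1:
                    p.add_(torch.randn_like(p) * noise_ratio * p.std())
            outs.append(torch.softmax(noisy(x), dim=1))
        outs = torch.stack(outs)                  # shape (n, batch, classes)
        f_bar = outs.mean(dim=0)                  # estimate of E_W[f(W; x)], eq. (2)
        onehot = torch.nn.functional.one_hot(y, outs.shape[-1]).float()
        l_var += ((outs - f_bar) ** 2).mean(dim=0).sum().item()  # eq. (3) terms
        l_bias += ((f_bar - onehot) ** 2).sum().item()           # eq. (4) terms
        count += x.shape[0]
    return l_var / count, l_bias / count
```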
4.2 LOSS OF INFORMATION IN A NOISY NEURAL NETWORK
Information theory offers useful tools to study noise in neural networks. Mutual information I(X;Y) characterizes the amount of information obtained about random variable X by observing another random variable Y. The mutual information between X and Y can be related to Shannon entropy by

$$I(X;Y) = H(Y) - H(Y|X). \qquad (6)$$

Mutual information has been used to understand DNNs (Tishby & Zaslavsky, 2015; Saxe et al., 2018). Treating a noisy neural network as a noisy information channel, we can show how information about the input to the neural network diminishes as it propagates through the noisy computation. In this subsection, X is the input to the neural network and Y is the output. Mutual information is estimated for the baseline LeNet model and its variants using Equation 6. When there is no noise, the term H(Y|X) is zero as Y is deterministic once the input to the network X is known; therefore I(X;Y) is just H(Y) in this case. Shannon entropy H(Y) can be estimated using a standard discrete binning approach (Saxe et al., 2018). In our experiment, Y is the output of the softmax layer
which is a vector of length 10. Entropy H(Y ) is estimated using four bins per coordinate of Y by
$$\hat{H}(Y) = -\sum_{i=1}^{N} p_i \log(p_i), \qquad (7)$$
where p_i is the probability that an output falls in bin i. When noise is introduced to the weights, the conditional entropy H(Y|X) is estimated by fixing the input X = x and performing multiple noisy inferences to calculate Ĥ(Y|X = x) with the above binning approach. Ĥ(Y|X = x) is then averaged over different inputs x to obtain Ĥ(Y|X). This estimate is performed for LeNet and its variants at different noise levels. Results are shown in Figure 2(b). The values are normalized to the estimate of I(X;Y) at zero noise. Mutual information between the input and the output decays towards zero with increasing noise in the network weights. Furthermore, mutual information in a deeper and narrower network decays faster than in a shallower and wider network. Intuitively, information from the input undergoes more noisy compute when more layers are added to the network, while a wider network has more redundant paths for the information to flow, thus better preserving it. An information theoretic bound on mutual information decay as a function of network depth and width in a noisy neural network will be treated in our follow-up work. Overall, noise is damaging the learning capacity of the network. When the output of the model contains no information from its input, the network loses all ability to learn. For a noise level that is not so extreme, a significant amount of mutual information remains, which indicates that useful learning is possible even with a noisy model.
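A minimal NumPy sketch of this binning estimator is shown below; the helper `noisy_outputs(x, n)`, which would return n softmax output vectors for a fixed input under independent weight-noise draws, is a hypothetical we assume for illustration.

```python
import numpy as np

def binned_entropy(outputs, bins=4):
    # outputs: (num_samples, 10) softmax vectors; `bins` bins per coordinate.
    digitized = np.floor(np.clip(outputs, 0.0, 1.0 - 1e-9) * bins).astype(int)
    _, counts = np.unique(digitized, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))                # eq. (7)

def mutual_information(inputs, noisy_outputs, n=100):
    # H(Y): pool one noisy output per input across the whole test set.
    H_Y = binned_entropy(np.stack([noisy_outputs(x, 1)[0] for x in inputs]))
    # H(Y|X): entropy of n noisy inferences with the input held fixed, averaged.
    H_Y_given_X = np.mean([binned_entropy(noisy_outputs(x, n)) for x in inputs])
    return H_Y - H_Y_given_X                     # eq. (6)
```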
5 COMBINING NOISE INJECTION AND KNOWLEDGE DISTILLATION
5.1 METHODOLOGY
Noise injection during training is one way of exposing network training to a more realistic loss as randomly perturbing weights simulates what happens in a real noisy analog device, and forces the network to adapt to noise during training. Noise injection only happens in training during forward propagation, which can be considered as an approximation for calculating weight gradients with a straight-through-estimator (STE) (Bengio et al., 2013). At each forward pass, the weight Wl of layer l is drawn from an i.i.d. Gaussian distribution N (Wl;Wl0, σ2N,lI). The noise is referenced to the range of representable weights W lmax −W lmin in that particular layer
$\sigma_{N,l} = \eta\,(W_{\max}^{l} - W_{\min}^{l})$,  (8)
where $\eta$ is a coefficient characterizing the noise level. During back propagation, gradients are calculated with clean weights $\mathbf{W}_l^0$, and only $\mathbf{W}_l^0$ gets updated by applying the gradient. $W_{\max}^{l}$ and $W_{\min}^{l}$ are hyperparameters which can be chosen with information on the weight distributions.
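A minimal PyTorch sketch of this noise injection scheme follows. The layer size, $\eta$, and clipping range are illustrative placeholders; gradients computed through the noisy forward pass are applied to the clean weights $\mathbf{W}_l^0$, which is a common approximation to the straight-through behaviour described above.

```python
# Minimal sketch of Equation 8 noise injection in PyTorch. The layer always
# computes with noisy weights (matching noisy inference hardware), while the
# optimizer updates the clean weights W_0; eta and the clipping range are
# illustrative values, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Linear):
    def __init__(self, in_features, out_features, eta=0.057,
                 w_min=-1.0, w_max=1.0):
        super().__init__(in_features, out_features)
        self.eta, self.w_min, self.w_max = eta, w_min, w_max

    def forward(self, x):
        sigma = self.eta * (self.w_max - self.w_min)          # Equation 8
        noisy_w = self.weight + sigma * torch.randn_like(self.weight)
        return F.linear(x, noisy_w, self.bias)

layer = NoisyLinear(16, 10)
out = layer(torch.randn(4, 16))
out.sum().backward()                    # gradient flows to the clean weight
layer.weight.data.clamp_(layer.w_min, layer.w_max)   # keep weights in range
```

In a training loop, the final `clamp_` call would be repeated after each optimizer step, implementing the fixed-range weight clipping adopted below.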
Knowledge distillation was introduced by Hinton et al. (2015) as a way of training a smaller student model using a larger model as the teacher. For an input to the neural network $x$, the teacher model generates logits $z_i^T$, which are then turned into a probability vector by the softmax layer
$q_i^T = \sigma(z_i^T; T) = \dfrac{\exp(z_i^T/T)}{\sum_j \exp(z_j^T/T)}$.  (9)
The temperature, $T$, controls the softness of the probabilities. The teacher network can generate softer labels for the student network by raising the temperature $T$. We propose to use a noise-free clean model as the teacher to train a noisy student network. The student network is trained with noise injection to match a mix of hard targets and soft targets generated by the teacher. Logits generated by the student network are denoted as $z_i^S$. A loss function with distillation for the student model can be written as
$\mathcal{L}(x; \mathbf{W}^S; T) = H(\sigma(z_i^S; T{=}1), y_{\mathrm{true}}) + \alpha T^2 H(\sigma(z_i^S; T), q_i^T) + R(\mathbf{W}_0^S)$.  (10)
Here $H$ is the cross-entropy loss, $y_{\mathrm{true}}$ is the one-hot encoding of the ground truth, and $R$ is the L2 regularization term. The parameter $\alpha$ balances the relative strength of hard and soft targets. We follow the original implementation in Hinton et al. (2015), which includes a $T^2$ factor in front of the soft-target loss to balance the gradients generated from the different targets. The student model is then trained
with Gaussian noise injection using this distillation loss function. Vanilla noise injection training corresponds to the case where $\alpha = 0$. If the range of weights is not constrained and the noise reference is fixed, the network soon learns that the most effective way to decrease the loss is to increase the amplitude of the weights, which increases the effective SNR. There are two possible ways to deal with this problem. Firstly, the noise reference could be re-calculated after each weight update, thus updating the noise power. Secondly, we can constrain the range of weights by clipping them to the range $[W_{\min}^{l}, W_{\max}^{l}]$, and use a fixed noise model during training. We found that in general the second method of fixing the range of weights and training for a specific noise level yields more stable training and better results. Therefore, this is the training method that we adopt in this paper. A schematic of our proposed method is shown in Figure 5 of the Appendix.
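A possible PyTorch rendering of Equation 10 is sketched below. The soft-target term is written with KL divergence, which differs from the cross-entropy $H(\sigma(z^S; T), q^T)$ only by the teacher's entropy, a constant with respect to the student's parameters; $\alpha$, $T$, and the tensor shapes are assumptions, and the L2 term $R(\mathbf{W})$ is left to the optimizer's weight decay.

```python
# Sketch of the combined hard/soft loss in Equation 10, assuming `student`
# and a frozen `teacher` return raw logits of shape (batch, classes).
# alpha and T are illustrative; R(W) is handled by optimizer weight decay.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target, alpha=1.0, T=6.0):
    hard = F.cross_entropy(student_logits, target)     # H(softmax(z_S), y_true)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean")             # soft-target matching
    # T^2 factor balances gradient magnitudes, as in Hinton et al. (2015)
    return hard + alpha * (T ** 2) * soft
```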
During training, a clean model is first trained to its full accuracy, and then weight clipping is applied to clip the weights to the range $[W_{\min}^{l}, W_{\max}^{l}]$. The specific range is chosen based on statistics of the weights. Fine-tuning is then applied to bring the weight-clipped clean model back to full accuracy. This model is then used as the teacher to generate soft targets. The noisy student network is initialized with the same weights as the teacher. This can be considered a warm start to accelerate retraining. As we discussed earlier, the range of weights is fixed during training, and the noise injected into the student model is referenced to this range.
Our method also supports training low-precision noisy models. Quantization reflects the finite-precision conversion between analog and digital domains in an analog accelerator. Weights are uniformly quantized in the range $[W_{\min}^{l}, W_{\max}^{l}]$ before being exposed to noise. In a given layer, the input activations are quantized before being multiplied by noisy weights. The output results of the matrix multiplication are also quantized before adding biases and performing batch normalization, which are considered to happen in the digital domain. When training with quantization, the straight-through estimator is assumed when calculating gradients with back propagation.
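One way to realize this uniform quantization with a straight-through estimator is sketched below; the 4-bit default and the clamp-then-round scheme are our reading of the description, and the calibrated range [q_min, q_max] is supplied by the caller.

```python
# Sketch: uniform quantization to `bits` within a calibrated range, with a
# straight-through estimator so gradients pass through the rounding step.
import torch

def quantize_ste(x, q_min, q_max, bits=4):
    levels = 2 ** bits - 1
    x_c = x.clamp(q_min, q_max)
    scale = (q_max - q_min) / levels
    x_q = torch.round((x_c - q_min) / scale) * scale + q_min
    # straight-through estimator: forward returns x_q, backward is identity
    return x + (x_q - x).detach()
```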
5.2 EXPERIMENTAL RESULTS
In order to establish the effectiveness of our proposed method, experiments are performed on different networks and datasets. In this section we mainly focus on bigger datasets and models, while results on LeNet and its variants, with some discussion of the effect of network architecture, can be found in Figure 6 of the Appendix. ResNets are a family of convolutional neural networks proposed by He et al. (2016), which have gained great popularity in computer vision applications. In fact, many other deep neural networks also use ResNet-like cells as their building blocks. ResNets are often used as industry-standard benchmark models to test hardware performance. The first set of experiments we present consists of a ResNet-32 model trained on the CIFAR10 dataset. In order to compare fairly with previous work, we follow the implementation in Joshi et al. (2019), and consider a ResNet-32(v1) model on CIFAR10 with weight clipping in the range $[-2\sigma_{W,l}, 2\sigma_{W,l}]$. The teacher model is trained to an accuracy of 93.845% using stochastic gradient descent with cosine learning rate decay (Loshchilov & Hutter, 2016) and an initial learning rate of 0.1 (batch size is 128). The network is then retrained with noise injection to make it robust against noise. Retraining takes place for 150 epochs; the initial learning rate is 0.01 and decays with the same cosine profile. We performed two sets of retraining, one without distillation in the loss ($\alpha = 0$), and another with distillation loss ($\alpha = 1$). Everything else was kept equal in these retraining runs. Five different noise levels are tested with five different values of $\eta$: {0.02, 0.04, 0.057, 0.073, 0.11}. Results are shown in Figure 3(a). Every retraining run was performed twice, and inference was performed 50 times on the test dataset for each model, to generate statistically significant results. The temperature was set to $T = 6$ for the runs with distillation. We found that an intermediate temperature between 2 and 10 produces better results. The pretrained model without any retraining performs very poorly at inference time when noise is present. Retraining with Gaussian noise injection can effectively recover some accuracy, which we confirm as reported in Joshi et al. (2019). Our method of combining noise injection with knowledge distillation from the clean model further improves noise resilience by about 40% in terms of $\eta$, which is an improvement of almost 2× in terms of noise power $\sigma_N^2$.
The actual noise level in a given device can only be estimated, and will vary from one device to another and even fluctuate depending on the physical environment in which it operates (Section 3). Therefore, it is important that any method to enhance noise robustness can tolerate a range of noise
levels. Our method offers improved noise robustness, even when the actual noise at inference time is different from that injected at training time. It is shown in Figure 3(b) that the model obtained from distillation is more accurate and less sensitive to noise level differences between training and inference time. This holds for a range of different inference noise levels around the training level. In the previous experiments, we assume a fixed noise level parameterized by η. On real analog hardware, there could be additional non-idealities such as variation in noise level due to temperature fluctuation and nonuniform noise profile on different NVM cells due to statistical variation in the manufacturing process. We have conducted additional experiments to account for these effects.
Results from the experiments are shown in Table 1. Temporal fluctuation represents noise level variation over time. The noise level $\eta$ is randomly sampled from $\mathcal{N}(\eta; \eta_0, \sigma_\eta^2)$ for each inference batch. A noise temporal fluctuation level of 10% means that $\sigma_\eta = 0.1\eta_0$. Spatial noise level fluctuation introduces nonuniform diagonal terms in the noise covariance matrix. More concretely, each weight noise in our previous model is multiplied by a scale factor $\lambda_w$, with $\lambda_w$ drawn from a Gaussian distribution $\mathcal{N}(\lambda_w; 1, \sigma_\lambda^2)$. A noise spatial fluctuation level of 10% means that $\sigma_\lambda = 0.1$. The scale factors are generated and then fixed when the network is instantiated; therefore the noise during network inference is non-i.i.d. in this case. Results from our experiments show that there is no significant deviation when a combination of these non-ideal noise effects is taken into account.
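The two fluctuation models of Table 1 can be sampled as in the sketch below; `eta0` and the 10% fluctuation levels are the illustrative values from the text, and the clamp on the temporal draw is our own guard against negative noise levels.

```python
# Sketch: sampling the two non-ideal noise effects in Table 1.
import torch

def temporal_eta(eta0=0.057, rel_std=0.1):
    # fresh noise level drawn per inference batch
    eta = torch.normal(mean=torch.tensor(eta0), std=rel_std * eta0)
    return eta.clamp(min=0.0)

def spatial_scales(weight_shape, rel_std=0.1):
    # per-weight scale factors lambda_w, drawn once when the network is
    # instantiated and then held fixed (non-i.i.d. inference noise)
    return torch.normal(mean=1.0, std=rel_std, size=weight_shape)
```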
The performance of our training method is also validated with quantization. A ResNet-18(v2) model is trained with quantization to 4-bit precision (ENOB) for both weights and activations. This corresponds to 4-bit precision conversions between digital and analog domains. A subset of training
data is passed through the full-precision model to calibrate the range for quantization: we choose the 0.1% and 99.9% percentiles as $q_{\min}$ and $q_{\max}$ for the quantizer. This range of quantization is fixed throughout training. The quantized model achieves an accuracy of 92.91% on the test dataset when no noise is present. The model is then re-trained for noise robustness. The noise level is referenced to the range of quantization of the weights in each particular layer, such that $W_{\min}^{l} = q_{\min,l}$ and $W_{\max}^{l} = q_{\max,l}$. Results are shown for the same set of $\eta$ values in Figure 4(a). In the distillation retraining runs, the full-precision clean model with an accuracy of 93.87% is used as the teacher and the temperature is set to $T = 6$. Due to the extra loss in precision imposed by aggressive quantization, accuracy of the pretrained quantized model drops sharply with noise. At $\eta = 0.057$, the model accuracy drops to 87.5% without retraining, and further down to 80.9% at $\eta = 0.073$. Even retraining with noise injection struggles, and the model retrained with only noise injection achieves an accuracy of 90.34% at $\eta = 0.073$. Our method of combining noise injection and distillation stands out by keeping the accuracy loss within 1% of the baseline up to a noise level of $\eta \approx 0.07$.
One interesting aspect of using distillation loss during retraining with noise can be seen in Figure 4(b). The evolution of model accuracy on the test dataset is shown. When no distillation loss is used, the model suffers an accuracy drop (difference between blue and orange curves) around 2.08% when tested with noise. The drop (difference between green and red curves) is significantly reduced to around 0.6% when distillation loss is used. This observation indicates that training with distillation favors solutions that are less sensitive to noise. The final model obtained with distillation is actually slightly worse when there is no noise at inference time but becomes superior when noise is present.
Results on the ImageNet dataset for a ResNet-50(v1) network are shown in Table 2 to demonstrate that our proposed approach scales to a large-scale dataset and a deep model. A ResNet-50 model is first trained to an accuracy of 74.942% with weight clipping in the range $[-2\sigma_{W,l}, 2\sigma_{W,l}]$. This range is fixed as the reference for added noise. For ResNet-50 on ImageNet, only three different noise levels are explored, as the accuracy degrades very quickly beyond the noise level $\eta = 0.06$ because the model and the task are considerably more complex. Retraining runs for 30 epochs with an initial learning rate of 0.001 and cosine learning rate decay with a batch size of 32. For distillation, we used $\alpha = 1$ and $T = 6$ as in the previous experiments. Results are collected for two independent training runs in each setting and 50 inference runs over the entire test dataset. The findings confirm that training with distillation and noise injection consistently delivers more noise-robust models. The accuracy uplift benefit also markedly increases with noise.
6 DISCUSSION
Effects of distillation Knowledge distillation is a proven technique to transfer knowledge from a larger teacher model to a smaller, lower capacity student model. This paper shows, for the first time, that distillation is also an effective way to transfer knowledge between a clean model and its noisy
counterpart, with the novel approach of combining distillation with noise injection during training. We give some intuition for understanding this effect with the help of Section 4.2: a noisy neural network can be viewed as a model with reduced learning capacity by the loss of mutual information argument. Distillation is therefore acting to help reduce this capacity gap.
In our experiments, distillation shows great benefit in helping the network converge to a good solution, even with a high level of noise injected in the forward propagation step. Here, we attempt to explain this effect by the reduced sensitivity of the distillation loss. An influential work by Papernot et al. (2016) shows that distillation can be used to reduce model sensitivity with respect to input perturbations, thus defending against some adversarial attacks. We argue that distillation can achieve a similar effect for the weights of the network. Taking the derivative of the $i$-th output of the student network $q_i^S$ at temperature $T$ with respect to a weight $w$ yields
$\dfrac{\partial q_i^S}{\partial w} = \dfrac{1}{T}\,\dfrac{\exp(z_i/T)}{\left(\sum_j \exp(z_j/T)\right)^{2}}\,\sum_j \exp(z_j/T)\left(\dfrac{\partial z_i}{\partial w} - \dfrac{\partial z_j}{\partial w}\right).$  (11)
The 1/T scaling makes the output less sensitive to weight perturbation at higher temperature, thus potentially stabilizing the training when noise is injected into weights during forward propagation. We plan to work on a more formal analysis of this argument in our future work.
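This scaling is easy to verify numerically. The sketch below compares the gradient magnitude of a tempered softmax output at $T = 1$ and $T = 6$ for toy linear logits; the dimensions and values are arbitrary and purely illustrative.

```python
# Quick numerical check of the 1/T sensitivity scaling in Equation 11,
# using toy linear logits z = W x; all sizes are illustrative.
import torch

x = torch.randn(16)
w = torch.randn(10, 16, requires_grad=True)

def grad_norm(T):
    q = torch.softmax((w @ x) / T, dim=0)
    g, = torch.autograd.grad(q[0], w)      # d q_0 / d w
    return g.norm().item()

print(f"|dq/dw| at T=1: {grad_norm(1.0):.4f},  at T=6: {grad_norm(6.0):.4f}")
```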
Hardware Performance Benefits The improvements in noise tolerance of neural networks demonstrated in this work have a potential impact on the design of practical analog hardware accelerators for neural network inference. Increased robustness to noisy computation at the model training level potentially means that the specification of the analog hardware can be relaxed. In turn, this can make it easier to achieve the hardware specification, or even allow optimizations to further reduce the energy consumption. An in-depth discussion of the trade-off between compute noise performance and hardware energy dissipation is beyond the scope of this paper, but we refer the interested reader to Rekhi et al. (2019) for more details. In summary, we believe that machine learning research will be a key enabler for practical analog hardware accelerators.
7 CONCLUSION
Analog hardware holds the potential to significantly reduce the latency and energy consumption of neural network inference. However, analog hardware is imprecise and introduces noise during computation that limits accuracy in practice. This paper explored the training of noisy neural networks, which suffer from reduced capacity leading to accuracy loss. We propose a training methodology that trains neural networks via distillation and noise injection to increase the accuracy of models under noisy computation. Experimental results across a range of models and datasets, including ImageNet, demonstrate that this approach can almost double the network noise tolerance compared with the previous best reported values, without any changes to the model itself beyond the training method. With these improvements in the accuracy of noisy neural networks, we hope to enable the implementation of analog inference hardware in the near future.

1. What is the reviewer's opinion of the paper's introduction and abstract?
2. What does the reviewer think about the paper's approach to analyzing neural networks using analog hardware?
3. What are the reviewer's concerns regarding the noise model used in the paper?
4. How does the reviewer assess the paper's theoretical contribution?
5. Does the reviewer believe the paper is suitable for publication at ICLR?

Review
I started reading the paper with high hopes. The abstract and the introduction were set up nicely and I was quite intrigued to see a theoretical analysis and practical implementation of a neural network using analogue hardware. However, as I read through the paper carefully multiple times, I realized that the expository opening description fails to live up to the standard it sets.
To be more specific, I did not enjoy the elaborate description of noisy analogue conductances, as the noise models analyzed in the paper are not tested on any such devices. The noise model introduced is fairly simplistic and, arguably, real-world systems are much more complex than such simplistic assumptions allow. The authors could have presented the paper as an analysis of knowledge distillation in neural network training. Even if the paper were presented that way, I would have doubted its chance of acceptance due to the incrementality of the theoretical contribution.
All in all, I believe this is a very promising direction to invest in, but the paper is not quite ready for ICLR.
1. What is the main contribution of the paper regarding deep neural network inference on noisy hardware?
2. What are the strengths of the proposed method, particularly in combining noise injection and knowledge distillation?
3. What are the weaknesses of the paper, especially in terms of the noise model and experimental results?
4. How does the reviewer assess the novelty and practicality of the approach?
5. What are the suggestions for improving the paper, such as testing the model on other noise models and scaling the effect to different network architectures?

Review
* Summary *
The article on "Noisy Machines" addresses the issue of implementing deep neural network inference on a noisy hardware computing substrate, e.g. analog accelerators. This is an important topic because analog devices allow fast and energy efficient inference, which is crucial for inference at the edge. Because of their analog nature such devices suffer from noisy computations, and in this article the case of noisy weights is studied.
The main contributions of this article are the following:
- an analysis of the performance loss in noisy networks by means of information theory and empirical results
- the idea of combining noise injection during training with knowledge distillation
- experimental evidence for a LeNet5 on MNIST, CIFAR10, and ResNet-50 on ImageNet
It has been shown in the literature that noise injection during training is an effective way to increase the noise robustness of neural networks. Relevant literature in this domain is cited. The novelty of the approach is to combine noise injection with distillation, by using the noise-free network as a teacher for the noisy network, which is initialized with the weights of the teacher. This is a novel variant of distillation and sounds like a simple to implement trick with beneficial results for increasing noise resiliency of networks. It is also proposed and shown that the method works for quantized networks.
The experimental results show that the combination of distillation and noise injection outperforms pure noise injection on all networks, as well as noisy inference without retraining. The effect is even more pronounced for quantized networks.
* Evaluation *
Overall I like this paper and think it is suitable for acceptance at ICLR, because it addresses an important practical problem of implementing deep networks on efficient hardware. The paper is well written and simple to understand, and the method should be easy to implement (though it would really help to provide code for the examples). To the best of my knowledge I have not seen precisely this combination of noise injection and distillation, although there is a lot of literature about each individual approach. I appreciate that the authors made an effort to not just show empirical results but also motivate their findings by theory, although the argumentation stays a bit superficial.
What I am mainly missing are two points:
1. The assumed noise model of i.i.d. Gaussian weights is the simplest possible, and the noise in actual analog hardware might deviate from it quite a bit. I would have liked to see a noise model that is derived from actual hardware observations, or maybe even a prototype implementation in hardware, such as was done e.g. in Binas et al. 2016. At the very least I would suggest testing the model on other noise models, including temporally changing noise levels, which could be a realistic scenario due to temperature fluctuations or other events.
2. The experimental results focus on MNIST, CIFAR10, and later briefly on ImageNet. While the results are quite convincing on MNIST and CIFAR, these are easier datasets with usually well separable classes, so the effect of noisy inference might not be as pronounced as in datasets with more confusion even in the clean case. In the case of ImageNet (Table 1) it looks like the difference to pure noise injection is not as big as it was in the CIFAR case, but here also only lower noise levels were tested. I would recommend also testing the same noise range as for CIFAR to understand whether distillation always shows the desired benefits, or if this is a diminishing effect for larger networks. Overall it would help to understand how the effect scales with network depth, e.g. by comparing the information loss for different ResNet depths.
I'm giving weak accept and would change to accept if there could be clarification on how the approach scales to different network architectures and noise models closer to actual hardware. I also recommend publishing some example code for this approach. |
ICLR | Title
Noisy Machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation
Abstract
The success of deep learning has brought forth a wave of interest in computer hardware design to better meet the high demands of neural network inference. In particular, analog computing hardware has been heavily motivated specifically for accelerating neural networks, based on either electronic, optical or photonic devices, which may well achieve lower power consumption than conventional digital electronics. However, these proposed analog accelerators suffer from the intrinsic noise generated by their physical components, which makes it challenging to achieve high accuracy on deep neural networks. Hence, for successful deployment on analog accelerators, it is essential to be able to train deep neural networks to be robust to random continuous noise in the network weights, which is a somewhat new challenge in machine learning. In this paper, we advance the understanding of noisy neural networks. We outline how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output. To combat this, we propose using knowledge distillation combined with noise injection during training to achieve more noise robust networks, which is demonstrated experimentally across different networks and datasets, including ImageNet. Our method achieves models with as much as ∼ 2× greater noise tolerance compared with the previous best attempts, which is a significant step towards making analog hardware practical for deep learning.
1 INTRODUCTION
Deep neural networks (DNNs) have achieved unprecedented performance over a wide variety of tasks such as computer vision, speech recognition, and natural language processing. However, DNN inference is typically very demanding in terms of compute and memory resources. Consequently, larger models are often not well suited for large-scale deployment on edge devices, which typically have meagre performance and power budgets, especially battery powered mobile and IoT devices. To address these issues, the design of specialized hardware for DNN inference has drawn great interest, and is an extremely active area of research. To date, a plethora of techniques have been proposed for designing efficient neural network hardware (Sze et al., 2017).
In contrast to the current status quo of predominantly digital hardware, there is significant research interest in analog hardware for DNN inference. In this approach, digital values are represented by analog quantities such as electrical voltages or light pulses, and the computation itself (e.g., multiplication and addition) proceeds in the analog domain, before eventually being converted back to digital. Analog accelerators take advantage of particular efficiencies of analog computation in exchange for losing the bit-exact precision of digital. In other words, analog compute is cheap but somewhat imprecise. Analog computation has been demonstrated in the context of DNN inference in electronic (Binas et al., 2016), photonic (Shen et al., 2017) and optical (Lin et al., 2018) systems. Analog accelerators promise to deliver at least two orders of magnitude better performance than a conventional digital processor for deep learning workloads, in both speed (Shen et al., 2017) and energy efficiency (Ni et al., 2017). Electronic analog DNN accelerators are arguably the most mature technology and hence will be our focus in this work.
The most common approach to electronic analog DNN accelerator is in-memory computing, which typically uses non-volatile memory (NVM) crossbar arrays to encode the network weights as analog values. The NVM itself can be implemented with memristive devices, such as metal-oxide resistive random-access memory (ReRAM) (Hu et al., 2018) or phase-change memory (PCM) (Le Gallo et al., 2018; Boybat et al., 2018; Ambrogio et al., 2018). The matrix-vector operations computed during inference are then performed in parallel inside the crossbar array, operating on analog quantities for weights and activations. For example, addition of two quantities encoded as electrical currents can be achieved by simply connecting the two wires together, whereby the currents will add linearly according to Kirchhoff’s current law. In this case, there is almost zero latency or energy dissipation for this operation.
Similarly, multiplication with a weight can be achieved by programming the NVM cell conductance to the weight value, which is then used to convert an input activation encoded as a voltage into a scaled current, following Ohm’s law. Therefore, the analog approach promises significantly improved throughput and energy efficiency. However, the analog nature of the weights makes the compute noisy, which can limit inference accuracy. For example, a simple two-layer fully-connected network with a baseline accuracy of 91.7% on digital hardware, achieves only 76.7% when implemented on an analog photonic array (Shen et al., 2017). This kind of accuracy degradation is not acceptable for most deep learning applications. Therefore, the challenge of imprecise analog hardware motivates us to study and understand noisy neural networks, in order to maintain inference accuracy under noisy analog computation.
The question of how to effectively learn and compute with a noisy machine is a long-standing problem of interest in machine learning and computer science (Stevenson et al., 1990; Von Neumann, 1956). In this paper, we study noisy neural networks to understand their inference performance. We also demonstrate how to train a neural network with distillation and noise injection to make it more resilient to computation noise, enabling higher inference accuracy for models deployed on analog hardware. We present empirical results that demonstrate state-of-the-art noise tolerance on multiple datasets, including ImageNet.
The remainder of the paper is organized as follows. Section 2 gives an overview of related work. Section 3 outlines the problem statement. Section 4 presents a more formal analysis of noisy neural networks. Section 5 gives a distillation methodology for training noisy neural networks, with experimental results. Finally, Section 6 provides a brief discussion and Section 7 closes with concluding remarks.
2 RELATED WORK
Previous work broadly falls under the following categories: studying the effect of analog computation noise, analysis of noise-injection for DNNs, and use of distillation in model training.
Analog Computation Noise Models In Rekhi et al. (2019), the noise due to analog computation is modeled as additive parameter noise with zero-mean Gaussian distribution. The variance of this Gaussian is a function of the effective number of bits of the output of an analog computation. Similarly, the authors in Joshi et al. (2019) also model analog computation noise as additive Gaussian noise on the parameters, where the variance is proportional to the range of values that their PCM device can represent. Some noise models presented have included a more detailed account of device-level interactions, such as voltage drop across the analog array (Jain et al., 2018; Feinberg et al., 2018), but are beyond the scope of this paper. In this work, we consider an additive Gaussian noise model on the weights, similar to Rekhi et al. (2019); Joshi et al. (2019) and present a novel training method that outperforms the previous work in model noise resilience.
Noise Injection for Neural Networks Several stochastic regularization techniques based on noise-injection and dropout (Srivastava et al., 2014; Noh et al., 2017; Li & Liu, 2016) have been demonstrated to be highly effective at reducing overfitting. For generalized linear models, dropout and additive noise have been shown to be equivalent to adaptive L2 regularization to first order (Wager et al., 2013). Training networks with Gaussian noise added to the weights or activations can also increase robustness to a variety of adversarial attacks (Rakin et al., 2018). Bayesian neural networks replace deterministic weights with distributions in order to optimize over the posterior distribution of the weights (Kingma & Welling, 2013). Many of these methods use noise injection at inference time to approximate the weight distribution; in Gal & Ghahramani (2016) a link between Gaussian processes and dropout is established in an effort to model the uncertainty of the output of a network. A theoretical analysis by Stevenson et al. (1990) has shown that for neural networks with adaptive linear neurons, the probability of error of a noisy neural network classifier with weight noise increases with the number of layers, but is largely independent of the number of weights per neuron or neurons per layer.
Distillation in Training Knowledge distillation (Hinton et al., 2015) is a well known technique in which the soft labels produced by a teacher model are used to train a student model which typically has reduced capacity. Distillation has shown merit for improving model performance across a range of scenarios, including student models lacking access to portions of training data (Micaelli & Storkey, 2019), quantized low-precision networks (Polino et al., 2018; Mishra & Marr, 2017), protection against adversarial attacks (Papernot et al., 2016; Goldblum et al., 2019), and in avoiding catastrophic forgetting for multi-task learning (Schwarz et al., 2018). To the best of our knowledge, our work is the first to combine distillation with noise injection in training to enhance model noise robustness.
3 PROBLEM STATEMENT
Without loss of generality, we model a general noisy machine after a simple memristive crossbar array, similar to Shafiee et al. (2016). Figure 1 illustrates how an arbitrary neural network layer, l, such as a typical 3×3 convolution, can be mapped to this hardware substrate by first flattening the weights into a single large 2D matrix, $W_l$, and then programming each element of this matrix into a memristive cell in the crossbar array, which provides the required conductances $G_l$ (the reciprocal of resistance) to perform analog multiplication following Ohm's law, $i_{out} = v_{in}G$. Note that a differential pair of NVM devices is typically used to represent a signed quantity in $G_l$. Subsequently, input activations $x_l$, converted into continuous voltages $v(x_l)$, are streamed into the array rows from the left-hand side. The memristive devices connect rows with columns, where the row voltages are converted into currents scaled by the programmed conductance, G, to generate the currents $i(y_l)$, which are differential in order to represent both positive and negative quantities with unipolar signals. The currents from each memristive device essentially add up for free where they are connected in the columns, according to Kirchhoff's current law. Finally, the differential currents are converted to bipolar voltages, $v(y_l)$, which are then digitized before adding bias and performing batch normalization and ReLU operations, which are not shown in Figure 1.
However, the analog inference hardware of Figure 1 is subject to real-world non-idealities, typically attributed to variations in 1) manufacturing process, 2) supply voltage and 3) temperature, collectively known as PVT variation, all of which result in noise in the system. Below we discuss the two key components in terms of analog noise modeling.
Data Converters. Digital-to-analog converter (DAC) and analog-to-digital converter (ADC) circuits are designed to be robust to PVT variation, but in practice these effects do degrade the resolution (i.e. number of bits). Therefore, we consider effective number of bits (ENOB), which is a lower bound on resolution in the presence of non-idealities. Hence, we use activation and weight quantization with ENOB data converters and no additional converter noise modeling.
NVM cells. Due to their analog nature, memristive NVM cells have limited precision, due to the read and write circuitry (Joshi et al., 2019). In between write and read operations, their stored value is prone to drift over time. Long-term drift can be corrected with periodic refresh operations. At shorter timescales, time-varying noise may be encountered. For most of the experiments in this paper, we model generic NVM cell noise as an additive zero-mean i.i.d. Gaussian error term on the weights of the model in each particular layer, $\Delta W_l \sim \mathcal{N}(\Delta W_l;\, 0, \sigma^2_{N,l} I)$. This simple model, described more concretely in Section 5, is similar to that used by Joshi et al. (2019) which was verified on real hardware. In addition, we also investigate spatially-varying and time-varying noise models in Section 5.2 (Table 1).
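To make this noise model concrete, here is a minimal PyTorch sketch of perturbing a pretrained model's weights for noisy-inference evaluation. The helper name `add_weight_noise`, the choice to perturb every module exposing a weight tensor, and the range-referenced standard deviation (anticipating Equation 8 in Section 5) are our own illustrative assumptions, not the authors' code.

```python
import copy
import torch

def add_weight_noise(model, eta=0.04):
    """Add i.i.d. Gaussian noise Delta W_l ~ N(0, sigma_{N,l}^2 I) to each
    layer's weights, with sigma_{N,l} referenced to that layer's weight range."""
    with torch.no_grad():
        for module in model.modules():
            w = getattr(module, "weight", None)
            if not isinstance(w, torch.Tensor):
                continue
            sigma = eta * (w.max() - w.min())    # sigma_{N,l} = eta * (W_max - W_min)
            w.add_(torch.randn_like(w) * sigma)  # perturb in place

# Example: evaluate many noisy instances of a pretrained `model`.
# for _ in range(100):
#     noisy = copy.deepcopy(model)
#     add_weight_noise(noisy, eta=0.04)
#     accuracy = evaluate(noisy)  # user-supplied evaluation loop
```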
4 ANALYSIS OF NOISY NEURAL NETWORKS
4.1 BIAS VARIANCE DECOMPOSITION FOR NOISY WEIGHTS
Naively deploying an off-the-shelf pretrained model on a noisy accelerator will yield poor accuracy for a fundamental reason. Consider a neural network f(W;x) with weights W that maps an input $x \in \mathbb{R}^n$ to an output $y \in \mathbb{R}$. In the framework of statistical learning, x and y are considered to be randomly distributed following a joint probability distribution p(x, y). In a noisy neural network, the weights W are also randomly distributed, with distribution p(W). The expected Mean Squared Error (MSE) of this noisy neural network can be decomposed as
$$
\begin{aligned}
\mathbb{E}_{(x,y)\sim p(x,y),\,W\sim p(W)}\!\left[(f(W;x)-y)^2\right]
&=\mathbb{E}_{(x,y)\sim p(x,y),\,W\sim p(W)}\!\left[\left(f(W;x)-\mathbb{E}_{W\sim p(W)}[f(W;x)]+\mathbb{E}_{W\sim p(W)}[f(W;x)]-y\right)^2\right]\\
&=\mathbb{E}_{x\sim p(x)}\!\left[\mathbb{E}_{W\sim p(W)}\!\left[\left(f(W;x)-\mathbb{E}_{W\sim p(W)}[f(W;x)]\right)^2\right]\right]\\
&\quad+\mathbb{E}_{(x,y)\sim p(x,y)}\!\left[\left(\mathbb{E}_{W\sim p(W)}[f(W;x)]-y\right)^2\right]. \qquad (1)
\end{aligned}
$$
The first term on the right hand side of Equation 1 is a variance loss term due to randomness in the weights and is denoted as $l_{var}$. The second term is a squared bias loss term which we call $l_{bias}$. However, a model is typically trained to minimize the empirical version of the expected loss $l_{pretrained} = \mathbb{E}_{(x,y)\sim p(x,y)}[(f(\mathbb{E}[W];x) - y)^2]$. We assume that the noise is centered such that the pretrained weights are equal to $\mathbb{E}[W]$. A pretrained model is therefore optimized for the wrong loss function when deployed on a noisy accelerator. To show this more concretely, a baseline LeNet model (32 filters in the first convolutional layer, 64 filters in the second convolutional layer and 1024 neurons in the fully-connected layer) (LeCun et al., 1998) is trained on the MNIST dataset to 99.19% accuracy and then exposed to Gaussian noise in its weights, so that numerical values of these loss terms can be estimated. The expected value of the network output $\mathbb{E}_W[f(W;x)]$ is estimated by averaging over outputs of different instances of the network for the same input x. We perform inference on n = 100 different instances of the network and estimate the loss terms as
$$
\bar{f}(W;x) = \mathbb{E}_{W\sim p(W)}[f(W;x)] \simeq \frac{1}{n}\sum_{i=1}^{n} f(W_i;x), \qquad (2)
$$
$$
\hat{l}_{var} = \frac{1}{N}\sum_{j=1}^{N}\frac{1}{n}\sum_{i=1}^{n}\left(f(W_i;x_j)-\bar{f}(W;x_j)\right)^2, \qquad (3)
$$
$$
\hat{l}_{bias} = \frac{1}{N}\sum_{j=1}^{N}\left(\bar{f}(W;x_j)-y_j\right)^2, \qquad (4)
$$
$$
\hat{l}_{pretrained} = \frac{1}{N}\sum_{j=1}^{N}\left(f(\mathbb{E}[W];x_j)-y_j\right)^2. \qquad (5)
$$
The above formulas are for a network with a scalar output. They can be easily extended to the vector output case by averaging over all outputs. In the LeNet example, we take the output of the softmax layer to calculate squared losses. The noise is assumed i.i.d. Gaussian, centered around zero, with a fixed SNR $\sigma^2_{W,l}/\sigma^2_{N,l}$ in each layer l. The numerical values of the above losses are estimated using the entire test dataset for different noise levels. Results are shown in Figure 2(a). $\hat{l}_{bias}$ is initially equal to $\hat{l}_{pretrained}$ and $\hat{l}_{var} = 0$ when there is no noise. However, as the noise level rises, they increase in magnitude and become much more important than $\hat{l}_{pretrained}$. $\hat{l}_{var}$ overtakes $\hat{l}_{bias}$ to become the predominant loss term in a noisy LeNet at $\sigma_N/\sigma_W \simeq 0.6$. It is useful to note that $l_{bias}$ increases with noise entirely due to nonlinearity in the network, which is ReLU in the case of LeNet. In a linear model, $l_{bias}$ would be equal to $l_{pretrained}$, as we would have $f(\mathbb{E}[W];x) = \mathbb{E}[f(W;x)]$. A model trained in a conventional manner is thus not optimized for the real loss it is going to encounter on a noisy accelerator. Special retraining is required to improve its noise tolerance. In Figure 2(a), we also show how the model accuracy degrades with a rising noise level for the baseline LeNet and its deeper and wider variants. The deeper network is obtained by stacking two more convolutional layers of width 16 in front of the baseline network, and the wider network is obtained by increasing the widths of each layer in the baseline to 128, 256 and 2048 respectively. Performance degradation due to noise is worse for the deeper variant and less severe for the wider one. A more detailed discussion of the effect of network architecture on performance under noise is offered in Section 4.2.
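The estimators in Equations (2)–(5) translate directly into code. Below is a hedged PyTorch sketch; the function name, the reuse of the `add_weight_noise` helper sketched in Section 3, and the averaging of squared errors over the softmax outputs (the vector-output extension mentioned above) are illustrative assumptions.

```python
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def estimate_loss_terms(model, data_loader, eta=0.04, n_instances=100, n_classes=10):
    """Monte Carlo estimates of Eqs. (2)-(5): sample noisy copies of the
    network, average their softmax outputs (Eq. 2), and accumulate the
    variance, squared-bias and pretrained loss terms over the test set."""
    l_var = l_bias = l_pre = 0.0
    n_examples = 0
    for x, y in data_loader:
        y1 = F.one_hot(y, n_classes).float()
        outs = []
        for _ in range(n_instances):
            noisy = copy.deepcopy(model)
            add_weight_noise(noisy, eta)                  # helper sketched above
            outs.append(torch.softmax(noisy(x), dim=1))
        outs = torch.stack(outs)                          # [n, batch, classes]
        f_bar = outs.mean(dim=0)                          # Eq. (2)
        l_var += ((outs - f_bar) ** 2).mean(dim=(0, 2)).sum().item()   # Eq. (3)
        l_bias += ((f_bar - y1) ** 2).mean(dim=1).sum().item()         # Eq. (4)
        clean = torch.softmax(model(x), dim=1)
        l_pre += ((clean - y1) ** 2).mean(dim=1).sum().item()          # Eq. (5)
        n_examples += x.shape[0]
    return l_var / n_examples, l_bias / n_examples, l_pre / n_examples
```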
4.2 LOSS OF INFORMATION IN A NOISY NEURAL NETWORK
Information theory offers useful tools to study noise in neural networks. Mutual information I(X;Y) characterizes the amount of information obtained about a random variable X by observing another random variable Y. The mutual information between X and Y can be related to Shannon entropy by
$$I(X;Y) = H(Y) - H(Y|X). \qquad (6)$$
Mutual information has been used to understand DNNs (Tishby & Zaslavsky, 2015; Saxe et al., 2018). Treating a noisy neural network as a noisy information channel, we can show how information about the input to the neural network diminishes as it propagates through the noisy computation. In this subsection, X is the input to the neural network and Y is the output. Mutual information is estimated for the baseline LeNet model and its variants using Equation 6. When there is no noise, the term H(Y|X) is zero as Y is deterministic once the input to the network X is known, therefore I(X;Y) is just H(Y) in this case. Shannon entropy H(Y) can be estimated using a standard discrete binning approach (Saxe et al., 2018). In our experiment, Y is the output of the softmax layer
which is a vector of length 10. Entropy H(Y) is estimated using four bins per coordinate of Y by
$$\hat{H}(Y) = -\sum_{i=1}^{N} p_i \log(p_i), \qquad (7)$$
where $p_i$ is the probability that an output falls in bin i. When noise is introduced to the weights, the conditional entropy H(Y|X) is estimated by fixing the input X = x and performing multiple noisy inferences to calculate $\hat{H}(Y|X = x)$ with the above binning approach. $\hat{H}(Y|X = x)$ is then averaged over different inputs x to obtain $\hat{H}(Y|X)$. This estimate is performed for LeNet and its variants at different noise levels. Results are shown in Figure 2(b). The values are normalized to the estimate of I(X;Y) at zero noise. Mutual information between the input and the output decays towards zero with increasing noise in the network weights. Furthermore, mutual information in a deeper and narrower network decays faster than in a shallower and wider network. Intuitively, information from the input undergoes more noisy compute when more layers are added to the network, while a wider network has more redundant paths for the information to flow, thus better preserving it. An information theoretic bound on mutual information decay as a function of network depth and width in a noisy neural network will be treated in our follow-up work. Overall, noise damages the learning capacity of the network. When the output of the model contains no information from its input, the network loses all ability to learn. For a noise level that is not so extreme, a significant amount of mutual information remains, which indicates that useful learning is possible even with a noisy model.
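A minimal NumPy sketch of this binning estimator follows. The function names and the equal-width bins on [0, 1] are our own assumptions; as in the text, four bins per softmax coordinate are used, and $\hat{H}(Y|X)$ is the average of per-input entropies over repeated noisy inferences.

```python
import numpy as np

def entropy_from_bins(outputs, n_bins=4):
    """Discrete entropy estimate (Eq. 7): bin each softmax coordinate of
    `outputs` (shape [samples, classes], values in [0, 1]) into n_bins
    equal intervals, then count joint bin occupancies."""
    ids = np.clip((outputs * n_bins).astype(int), 0, n_bins - 1)
    _, counts = np.unique(ids, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

def mutual_information(outputs_per_input):
    """I(X;Y) = H(Y) - H(Y|X) (Eq. 6). `outputs_per_input[i]` holds the
    noisy softmax outputs of repeated inferences on a fixed input x_i."""
    all_y = np.concatenate(outputs_per_input, axis=0)
    h_y = entropy_from_bins(all_y)
    h_y_given_x = np.mean([entropy_from_bins(y) for y in outputs_per_input])
    return h_y - h_y_given_x
```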
5 COMBINING NOISE INJECTION AND KNOWLEDGE DISTILLATION
5.1 METHODOLOGY
Noise injection during training is one way of exposing network training to a more realistic loss, as randomly perturbing weights simulates what happens in a real noisy analog device and forces the network to adapt to noise during training. Noise injection only happens during forward propagation in training, which can be considered an approximation to calculating weight gradients with a straight-through estimator (STE) (Bengio et al., 2013). At each forward pass, the weight $W_l$ of layer l is drawn from an i.i.d. Gaussian distribution $\mathcal{N}(W_l;\, W_{l0}, \sigma^2_{N,l} I)$. The noise is referenced to the range of representable weights $W^l_{max} - W^l_{min}$ in that particular layer
$$\sigma_{N,l} = \eta\,(W^l_{max} - W^l_{min}), \qquad (8)$$
where $\eta$ is a coefficient characterizing the noise level. During back propagation, gradients are calculated with clean weights $W_{l0}$, and only $W_{l0}$ gets updated by applying the gradient. $W^l_{max}$ and $W^l_{min}$ are hyperparameters which can be chosen with information on the weight distributions.
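A possible PyTorch realization of this training step is sketched below. It injects noise into all parameters with a single global weight range for brevity — a real implementation would use per-layer ranges $[W^l_{min}, W^l_{max}]$ and restrict noise to weight tensors — and restores the clean weights $W_{l0}$ before the optimizer update, mimicking the STE-style gradient described above.

```python
import torch

def noisy_training_step(model, x, y, optimizer, loss_fn, eta, w_min, w_max):
    """One training step with noise injected only in the forward pass:
    sample Delta W per Eq. (8), run forward/backward through the noisy
    weights, then remove the same noise so the optimizer updates the
    clean weights W_0."""
    sigma = eta * (w_max - w_min)                 # Eq. (8), fixed weight range
    noises = []
    with torch.no_grad():
        for p in model.parameters():
            n = torch.randn_like(p) * sigma
            p.add_(n)
            noises.append(n)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                               # gradients at the noisy point
    with torch.no_grad():
        for p, n in zip(model.parameters(), noises):
            p.sub_(n)                             # restore clean weights W_0
    optimizer.step()                              # apply gradient to W_0
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(w_min, w_max)                # keep weights in the fixed range
    return loss.item()
```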
Knowledge distillation was introduced by Hinton et al. (2015) as a way of training a smaller student model using a larger model as the teacher. For an input to the neural network x, the teacher model generates logits $z_i^T$, which are then turned into a probability vector by the softmax layer
$$q_i^T = \sigma(z_i^T; T) = \frac{\exp(z_i^T/T)}{\sum_j \exp(z_j^T/T)}. \qquad (9)$$
The temperature, T, controls the softness of the probabilities. The teacher network can generate softer labels for the student network by raising the temperature T. We propose to use a noise-free clean model as the teacher to train a noisy student network. The student network is trained with noise injection to match a mix of hard targets and soft targets generated by the teacher. Logits generated by the student network are denoted as $z_i^S$. A loss function with distillation for the student model can be written as
$$\mathcal{L}(x; W_S; T) = H(\sigma(z_i^S; T{=}1),\, y_{true}) + \alpha T^2 H(\sigma(z_i^S; T),\, q_i^T) + R(W_{S0}). \qquad (10)$$
Here H is the cross-entropy loss, $y_{true}$ is the one-hot encoding of the ground truth, and R is the L2-regularization term. The parameter $\alpha$ balances the relative strength between hard and soft targets. We follow the original implementation in Hinton et al. (2015), which includes a $T^2$ factor in front of the soft target loss to balance the gradients generated from the different targets. The student model is then trained with Gaussian noise injection using this distillation loss function. The vanilla noise injection training corresponds to the case where $\alpha = 0$. If the range of weights is not constrained and the noise reference is fixed, the network soon learns that the most effective way to decrease the loss is to increase the amplitude of the weights, which increases the effective SNR. There are two possible ways to deal with this problem. Firstly, the noise reference could be re-calculated after each weight update, thus updating the noise power. Secondly, we can constrain the range of weights by clipping them to the range $[W^l_{min}, W^l_{max}]$ and use a fixed noise model during training. We found that, in general, the second method of fixing the range of weights and training for a specific noise level yields more stable training and better results. Therefore, this is the training method that we adopt in this paper. A schematic of our proposed method is shown in Figure 5 of the Appendix.
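A minimal PyTorch sketch of the loss in Equation (10) is given below, assuming the L2 term R is handled by the optimizer's weight_decay — an implementation choice on our part, not necessarily the authors'.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, y_true, alpha=1.0, T=6.0):
    """Eq. (10): hard cross-entropy at T=1 plus alpha * T^2 times the
    soft-target cross-entropy between student and teacher at temperature T."""
    hard = F.cross_entropy(student_logits, y_true)
    soft_teacher = F.softmax(teacher_logits.detach() / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    soft = -(soft_teacher * log_soft_student).sum(dim=1).mean()
    return hard + alpha * (T ** 2) * soft
```

In a full training loop, this loss function would simply replace `loss_fn` in the noisy training step sketched earlier, with the teacher's logits computed on clean weights.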
During training, a clean model is first trained to its full accuracy and then weight clipping is applied to clip the weights to the range $[W^l_{min}, W^l_{max}]$. The specific range is chosen based on statistics of the weights. Fine-tuning is then applied to bring the weight-clipped clean model back to full accuracy. This model is then used as the teacher to generate soft targets. The noisy student network is initialized with the same weights as the teacher. This can be considered a warm start to accelerate retraining. As we discussed earlier, the range of weights is fixed during training, and the noise injected into the student model is referenced to this range.
Our method also supports training low-precision noisy models. Quantization reflects finite precision conversion between analog and digital domains in an analog accelerator. Weights are uniformly quantized in the range $[W^l_{min}, W^l_{max}]$ before being exposed to noise. In a given layer, the input activations are quantized before being multiplied by noisy weights. The output results of the matrix multiplication are also quantized before adding biases and performing batch normalization, which are considered to happen in the digital domain. When training with quantization, a straight-through estimator is assumed when calculating gradients with back propagation.
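A small sketch of such a uniform quantizer with a straight-through estimator, in PyTorch; the function name and the detach-based STE trick are illustrative assumptions rather than the authors' exact implementation.

```python
import torch

def quantize_ste(x, q_min, q_max, bits=4):
    """Uniform quantizer over [q_min, q_max] with 2**bits levels and a
    straight-through estimator: the forward pass returns the quantized
    value, while the backward pass passes gradients through the clamped
    input unchanged."""
    levels = 2 ** bits - 1
    x_c = x.clamp(q_min, q_max)
    scale = (q_max - q_min) / levels
    x_q = torch.round((x_c - q_min) / scale) * scale + q_min
    return x_c + (x_q - x_c).detach()   # equals x_q in forward; gradient of x_c
```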
5.2 EXPERIMENTAL RESULTS
In order to establish the effectiveness of our proposed method, experiments are performed for different networks and datasets. In this section we mainly focus on bigger datasets and models, while results on LeNet and its variants with some discussion of the network architecture effect can be found in Figure 6 of the Appendix. ResNets are a family of convolutional neural networks proposed by He et al. (2016), which have gained great popularity in computer vision applications. In fact, many other deep neural networks also use ResNet-like cells as their building blocks. ResNets are often used as industry standard benchmark models to test hardware performance. The first set of experiments we present consist of a ResNet-32 model trained on the CIFAR10 dataset. In order to compare fairly with the previous work, we follow the implementation in Joshi et al. (2019), and consider a ResNet-32(v1) model on CIFAR10 with weight clipping in the range $[-2\sigma_{W,l},\, 2\sigma_{W,l}]$. The teacher model is trained to an accuracy of 93.845% using stochastic gradient descent with cosine learning rate decay (Loshchilov & Hutter, 2016), and an initial learning rate of 0.1 (batch size is 128). The network is then retrained with noise injection to make it robust against noise. Retraining takes place for 150 epochs, the initial learning rate is 0.01 and decays with the same cosine profile. We performed two sets of retraining, one without distillation in the loss ($\alpha = 0$), and another with distillation loss ($\alpha = 1$). Everything else was kept equal in these retraining runs. Five different noise levels are tested with five different values of $\eta$: {0.02, 0.04, 0.057, 0.073, 0.11}. Results are shown in Figure 3(a). Every retraining run was performed twice, and inference was performed 50 times on the test dataset for one model, to generate statistically significant results. Temperature was set to T = 6 for the runs with distillation. We found that an intermediate temperature between 2 and 10 produces better results. The pretrained model without any retraining performs very poorly at inference time when noise is present. Retraining with Gaussian noise injection can effectively recover some accuracy, which we confirm as reported in Joshi et al. (2019). Our method of combining noise injection with knowledge distillation from the clean model further improves noise resilience by about 40% in terms of $\eta$, which is an improvement of almost 2× in terms of noise power $\sigma_N^2$.
The actual noise level in a given device can only be estimated, and will vary from one device to another and even fluctuate depending on the physical environment in which it operates (Section 3). Therefore, it is important that any method to enhance noise robustness can tolerate a range of noise
levels. Our method offers improved noise robustness, even when the actual noise at inference time is different from that injected at training time. It is shown in Figure 3(b) that the model obtained from distillation is more accurate and less sensitive to noise level differences between training and inference time. This holds for a range of different inference noise levels around the training level. In the previous experiments, we assume a fixed noise level parameterized by η. On real analog hardware, there could be additional non-idealities such as variation in noise level due to temperature fluctuation and nonuniform noise profile on different NVM cells due to statistical variation in the manufacturing process. We have conducted additional experiments to account for these effects.
Results from the experiments are shown in Table 1. Temporal fluctuation represents noise level variation over time. The noise level $\eta$ is randomly sampled from $\mathcal{N}(\eta;\, \eta_0, \sigma_\eta^2)$ for each inference batch. A noise temporal fluctuation level of 10% means that $\sigma_\eta = 0.1\,\eta_0$. Spatial noise level fluctuation introduces nonuniform diagonal terms in the noise covariance matrix. More concretely, each weight noise in our previous model is multiplied by a scale factor $\lambda_w$, with $\lambda_w$ drawn from a Gaussian distribution $\mathcal{N}(\lambda_w;\, 1, \sigma_\lambda^2)$. A noise spatial fluctuation level of 10% means that $\sigma_\lambda = 0.1$. The scale factors are generated and then fixed when the network is instantiated, therefore the noise during network inference is non-i.i.d. in this case. Results from our experiments show that there is no significant deviation when a combination of these non-ideal noise effects is taken into account.
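As an illustration, the two fluctuation models could be instantiated as follows (a sketch; the helper names and parameter choices are our own assumptions):

```python
import torch

def make_spatial_scales(model, sigma_lambda=0.1):
    """Fixed per-weight scale factors lambda_w ~ N(1, sigma_lambda^2), drawn
    once when the network is instantiated, so the effective noise is
    nonuniform across weights but constant across inferences."""
    return [1.0 + sigma_lambda * torch.randn_like(p) for p in model.parameters()]

def sample_eta(eta0, rel_fluct=0.1):
    """Temporal fluctuation: a fresh noise level per inference batch,
    eta ~ N(eta_0, (rel_fluct * eta_0)^2)."""
    return eta0 + rel_fluct * eta0 * torch.randn(1).item()
```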
The performance of our training method is also validated with quantization. A ResNet-18(v2) model is trained with quantization to 4-bit precision (ENOB) for both weights and activations. This corresponds to 4-bit precision conversions between digital and analog domains. A subset of training
data is passed through the full precision model to calibrate the range for quantization – we choose the 0.1% and 99.9% percentiles as $q_{min}$ and $q_{max}$ for the quantizer. This range of quantization is fixed throughout training. The quantized model achieves an accuracy of 92.91% on the test dataset when no noise is present. The model is then re-trained for noise robustness. The noise level is referenced to the range of quantization of weights in one particular layer, such that $W^l_{min} = q_{min,l}$ and $W^l_{max} = q_{max,l}$. Results are shown for the same set of $\eta$ values in Figure 4(a). In the distillation retraining runs, the full-precision clean model with an accuracy of 93.87% is used as the teacher and temperature is set to T = 6. Due to the extra loss in precision imposed by aggressive quantization, accuracy of the pretrained quantized model drops sharply with noise. At $\eta = 0.057$, the model accuracy drops to 87.5% without retraining and further down to 80.9% at $\eta = 0.073$. Even retraining with noise injection struggles, and the model retrained with only noise injection achieves an accuracy of 90.34% at $\eta = 0.073$. Our method of combining noise injection and distillation stands out by keeping the accuracy loss within 1% from the baseline up to a noise level of $\eta \simeq 0.07$.
One interesting aspect of using distillation loss during retraining with noise can be seen in Figure 4(b). The evolution of model accuracy on the test dataset is shown. When no distillation loss is used, the model suffers an accuracy drop (difference between blue and orange curves) around 2.08% when tested with noise. The drop (difference between green and red curves) is significantly reduced to around 0.6% when distillation loss is used. This observation indicates that training with distillation favors solutions that are less sensitive to noise. The final model obtained with distillation is actually slightly worse when there is no noise at inference time but becomes superior when noise is present.
Results on the ImageNet dataset for a ResNet-50(v1) network are shown in Table 2 to demonstrate that our proposed approach scales to a large-scale dataset and a deep model. A ResNet-50 model is first trained to an accuracy of 74.942% with weight clipping in the range [−2σW,l, 2σW,l]. This range is fixed as the reference for added noise. For ResNet-50 on ImageNet, only three different noise levels are explored, and the accuracy degrades very quickly beyond the noise level η = 0.06, as the model and the task are considerably more complex. Retraining runs for 30 epochs with an initial learning rate of 0.001 and cosine learning rate decay with a batch size of 32. For distillation, we used α = 1 and T = 6 as in previous experiments. Results are collected for two independent training runs in each setting and 50 inference runs over the entire test dataset. The findings confirm that training with distillation and noise injection consistently delivers more noise robust models. The accuracy uplift benefit also markedly increases with noise.
6 DISCUSSION
Effects of distillation Knowledge distillation is a proven technique to transfer knowledge from a larger teacher model to a smaller, lower capacity student model. This paper shows, for the first time, that distillation is also an effective way to transfer knowledge between a clean model and its noisy
counterpart, with the novel approach of combining distillation with noise injection during training. We give some intuition for understanding this effect with the help of Section 4.2: a noisy neural network can be viewed as a model with reduced learning capacity by the loss of mutual information argument. Distillation is therefore acting to help reduce this capacity gap.
In our experiments, distillation shows great benefit in helping the network to converge to a good solution, even with a high level of noise injected in the forward propagation step. Here, we attempt to explain this effect by the reduced sensitivity of the distillation loss. An influential work by Papernot et al. (2016) shows that distillation can be used to reduce the model's sensitivity with respect to input perturbations, thus defending against some adversarial attacks. We argue that distillation can achieve a similar effect for the weights of the network. Taking the derivative of the i-th output of the student network $q_i^S$ at temperature T with respect to a weight w yields
$$\frac{\partial q_i^S}{\partial w} = \frac{1}{T}\,\frac{\exp(z_i/T)}{\left(\sum_j \exp(z_j/T)\right)^2}\,\sum_j \exp(z_j/T)\left(\frac{\partial z_i}{\partial w} - \frac{\partial z_j}{\partial w}\right). \qquad (11)$$
The 1/T scaling makes the output less sensitive to weight perturbation at higher temperature, thus potentially stabilizing the training when noise is injected into weights during forward propagation. We plan to work on a more formal analysis of this argument in our future work.
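This 1/T scaling is easy to verify numerically. The short NumPy check below computes the Jacobian of the tempered softmax with respect to the logits, $(1/T)(\mathrm{diag}(q) - qq^\top)$, whose norm shrinks roughly as 1/T; the snippet is illustrative and not from the paper.

```python
import numpy as np

def softmax(z, T):
    e = np.exp(z / T)
    return e / e.sum()

z = np.array([2.0, -1.0, 0.5, 1.5])
for T in [1.0, 2.0, 6.0, 10.0]:
    q = softmax(z, T)
    J = (np.diag(q) - np.outer(q, q)) / T   # Jacobian of softmax(z/T) w.r.t. z
    print(f"T={T:>4}: ||J||_F = {np.linalg.norm(J):.4f}")
```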
Hardware Performance Benefits The improvements in noise tolerance of neural networks demonstrated in this work have a potential impact on the design of practical analog hardware accelerators for neural network inference. Increased robustness to noisy computation at the model training level potentially means that the specification of the analog hardware can be relaxed. In turn, this can make it easier to achieve the hardware specification, or even allow optimizations to further reduce the energy consumption. An in-depth discussion of the trade-off between compute noise performance and hardware energy dissipation is beyond the scope of this paper, but we refer the interested reader to Rekhi et al. (2019) for more details. In summary, we believe that machine learning research will be a key enabler for practical analog hardware accelerators.
7 CONCLUSION
Analog hardware holds the potential to significantly reduce the latency and energy consumption of neural network inference. However, analog hardware is imprecise and introduces noise during computation that limits accuracy in practice. This paper explored the training of noisy neural networks, which suffer from reduced capacity leading to accuracy loss. We propose a training methodology that trains neural networks via distillation and noise injection to increase the accuracy of models under noisy computation. Experimental results across a range of models and datasets, including ImageNet, demonstrate that this approach can almost double the network noise tolerance compared with the previous best reported values, without any changes to the model itself beyond the training method. With these improvements in the accuracy of noisy neural networks, we hope to enable the implementation of analog inference hardware in the near future. |
1. What are the main contributions and findings of the paper regarding inherent noise in analog neural networks?
2. What are the limitations of the noise model used in the study, and how might correlations in the injected noise affect the results?
3. How does the paper evaluate the impact of noise on the accuracy of analog neural networks, and what are the conclusions drawn from these evaluations?
4. Are there any aspects of the paper's methodology or analysis that could be improved upon, such as considering different types of noise models or exploring additional topics like scaling behaviors in analog RNNs?
5. In what ways might the paper's findings have practical significance for improving neural network-based inference with low power consumption and latency? | Review | Review
The authors of the manuscript study how inherent noise in analog neural networks affects their accuracy. This is a very important topic, as neural network-based inference becomes ubiquitous and is required to run with very low power consumption and latency.
The manuscript considers a system where the values of the neural network weights and biases experience i.i.d. Gaussian noise, which is a reasonable assumption. However, in heavy use the system may warm up, and then there could be an effect that is correlated across different weights. The noise model used would not be able to ensure proper inference in these conditions. I would like to see a discussion on the effect of correlations in the injected noise.
The mutual information is considered and evaluated for the "noisy" and "clean" versions, and the result is according to expectations. To some degree, I do not find this part very valuable, as it does not bring any particular insight into analog neural network operation. Rather, I would like to see how the analog performance under noise scales as the neural network gets more layers. Also, the noise behavior of an analog RNN would be very interesting.
The authors have found that even if the neural network trained without noise is not robust when the weights fluctuate, the trained network is a good starting point for transfer learning. To some degree I do not find this to be a very inventive step, as transfer learning has been shown to be able to cross much larger training data set alterations.
Good solid work, but lacking non-obvious results, and I do not see the manuscript addressing the harder challenges. However, the quantified results may have notable practical importance.
ICLR | Title
Noisy Machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation
Abstract
The success of deep learning has brought forth a wave of interest in computer hardware design to better meet the high demands of neural network inference. In particular, analog computing hardware has been heavily motivated specifically for accelerating neural networks, based on either electronic, optical or photonic devices, which may well achieve lower power consumption than conventional digital electronics. However, these proposed analog accelerators suffer from the intrinsic noise generated by their physical components, which makes it challenging to achieve high accuracy on deep neural networks. Hence, for successful deployment on analog accelerators, it is essential to be able to train deep neural networks to be robust to random continuous noise in the network weights, which is a somewhat new challenge in machine learning. In this paper, we advance the understanding of noisy neural networks. We outline how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output. To combat this, we propose using knowledge distillation combined with noise injection during training to achieve more noise robust networks, which is demonstrated experimentally across different networks and datasets, including ImageNet. Our method achieves models with as much as ∼ 2× greater noise tolerance compared with the previous best attempts, which is a significant step towards making analog hardware practical for deep learning.
N/A
The success of deep learning has brought forth a wave of interest in computer hardware design to better meet the high demands of neural network inference. In particular, analog computing hardware has been heavily motivated specifically for accelerating neural networks, based on either electronic, optical or photonic devices, which may well achieve lower power consumption than conventional digital electronics. However, these proposed analog accelerators suffer from the intrinsic noise generated by their physical components, which makes it challenging to achieve high accuracy on deep neural networks. Hence, for successful deployment on analog accelerators, it is essential to be able to train deep neural networks to be robust to random continuous noise in the network weights, which is a somewhat new challenge in machine learning. In this paper, we advance the understanding of noisy neural networks. We outline how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output. To combat this, we propose using knowledge distillation combined with noise injection during training to achieve more noise robust networks, which is demonstrated experimentally across different networks and datasets, including ImageNet. Our method achieves models with as much as ∼ 2× greater noise tolerance compared with the previous best attempts, which is a significant step towards making analog hardware practical for deep learning.
1 INTRODUCTION
Deep neural networks (DNNs) have achieved unprecedented performance over a wide variety of tasks such as computer vision, speech recognition, and natural language processing. However, DNN inference is typically very demanding in terms of compute and memory resources. Consequently, larger models are often not well suited for large-scale deployment on edge devices, which typically have meagre performance and power budgets, especially battery powered mobile and IoT devices. To address these issues, the design of specialized hardware for DNN inference has drawn great interest, and is an extremely active area of research. To date, a plethora of techniques have been proposed for designing efficient neural network hardware (Sze et al., 2017).
In contrast to the current status quo of predominantly digital hardware, there is significant research interest in analog hardware for DNN inference. In this approach, digital values are represented by analog quantities such as electrical voltages or light pulses, and the computation itself (e.g., multiplication and addition) proceeds in the analog domain, before eventually being converted back to digital. Analog accelerators take advantage of particular efficiencies of analog computation in exchange for losing the bit-exact precision of digital. In other words, analog compute is cheap but somewhat imprecise. Analog computation has been demonstrated in the context of DNN inference in both electronic (Binas et al., 2016), photonic (Shen et al., 2017) and optical (Lin et al., 2018) systems. Analog accelerators promise to deliver at least two orders of magnitude better performance over a conventional digital processor for deep learning workloads in both speed (Shen et al., 2017) and energy efficiency (Ni et al., 2017). Electronic analog DNN accelerators are arguably the most mature technology and hence will be our focus in this work.
The most common approach to electronic analog DNN accelerator is in-memory computing, which typically uses non-volatile memory (NVM) crossbar arrays to encode the network weights as analog values. The NVM itself can be implemented with memristive devices, such as metal-oxide resistive random-access memory (ReRAM) (Hu et al., 2018) or phase-change memory (PCM) (Le Gallo et al., 2018; Boybat et al., 2018; Ambrogio et al., 2018). The matrix-vector operations computed during inference are then performed in parallel inside the crossbar array, operating on analog quantities for weights and activations. For example, addition of two quantities encoded as electrical currents can be achieved by simply connecting the two wires together, whereby the currents will add linearly according to Kirchhoff’s current law. In this case, there is almost zero latency or energy dissipation for this operation.
Similarly, multiplication with a weight can be achieved by programming the NVM cell conductance to the weight value, which is then used to convert an input activation encoded as a voltage into a scaled current, following Ohm’s law. Therefore, the analog approach promises significantly improved throughput and energy efficiency. However, the analog nature of the weights makes the compute noisy, which can limit inference accuracy. For example, a simple two-layer fully-connected network with a baseline accuracy of 91.7% on digital hardware, achieves only 76.7% when implemented on an analog photonic array (Shen et al., 2017). This kind of accuracy degradation is not acceptable for most deep learning applications. Therefore, the challenge of imprecise analog hardware motivates us to study and understand noisy neural networks, in order to maintain inference accuracy under noisy analog computation.
The question of how to effectively learn and compute with a noisy machine is a long-standing problem of interest in machine learning and computer science (Stevenson et al., 1990; Von Neumann, 1956). In this paper, we study noisy neural networks to understand their inference performance. We also demonstrate how to train a neural network with distillation and noise injection to make it more resilient to computation noise, enabling higher inference accuracy for models deployed on analog hardware. We present empirical results that demonstrate state-of-the-art noise tolerance on multiple datasets, including ImageNet.
The remainder of the paper is organized as follows. Section 2 gives an overview of related work. Section 3 outlines the problem statement. Section 4 presents a more formal analysis of noisy neural networks. Section 5 gives a distillation methodology for training noisy neural networks, with experimental results. Finally, Section 6 provides a brief discussion and Section 7 closes with concluding remarks.
2 RELATED WORK
Previous work broadly falls under the following categories: studying the effect of analog computation noise, analysis of noise-injection for DNNs, and use of distillation in model training.
Analog Computation Noise Models In Rekhi et al. (2019), the noise due to analog computation is modeled as additive parameter noise with zero-mean Gaussian distribution. The variance of this Gaussian is a function of the effective number of bits of the output of an analog computation. Similarly, the authors in Joshi et al. (2019) also model analog computation noise as additive Gaussian noise on the parameters, where the variance is proportional to the range of values that their PCM device can represent. Some noise models presented have included a more detailed account of device-level interactions, such as voltage drop across the analog array (Jain et al., 2018; Feinberg et al., 2018), but are beyond the scope of this paper. In this work, we consider an additive Gaussian noise model on the weights, similar to Rekhi et al. (2019); Joshi et al. (2019) and present a novel training method that outperforms the previous work in model noise resilience.
Noise Injection for Neural Networks Several stochastic regularization techniques based on noise-injection and dropout (Srivastava et al., 2014; Noh et al., 2017; Li & Liu, 2016) have been demonstrated to be highly effective at reducing overfitting. For generalized linear models, dropout and additive noise have been shown to be equivalent to adaptive L2 regularization to a first order (Wager et al., 2013). Training networks with Gaussian noise added to the weights or activations can also increase robustness to variety of adversarial attacks (Rakin et al., 2018). Bayesian neural networks replace deterministic weights with distributions in order to optimize over the posterior
distribution of the weights (Kingma & Welling, 2013). Many of these methods use noise injection at inference time to approximate weight distribution; in Gal & Ghahramani (2016) a link between Gaussian processes and dropout is established in an effort to model the uncertainty of the output of a network. A theoretical analysis by Stevenson et al. (1990) has shown that for neural networks with adaptive linear neurons, the probability of error of a noisy neural network classifier with weight noise increases with the number of layers, but largely independent of the number of weights per neuron or neurons per layer.
Distillation in Training Knowledge distillation (Hinton et al., 2015) is a well known technique in which the soft labels produced by a teacher model are used to train a student model which typically has reduced capacity. Distillation has shown merit for improving model performance across a range of scenarios, including student models lacking access to portions of training data (Micaelli & Storkey, 2019), quantized low-precision networks (Polino et al., 2018; Mishra & Marr, 2017), protection against adversarial attacks (Papernot et al., 2016; Goldblum et al., 2019), and in avoiding catastrophic forgetting for multi-task learning (Schwarz et al., 2018). To the best of our knowledge, our work is the first to combine distillation with noise injection in training to enhance model noise robustness.
3 PROBLEM STATEMENT
Without loss of generality, we model a general noisy machine after a simple memristive crossbar array, similar to Shafiee et al. (2016). Figure 1 illustrates how an arbitrary neural network layer, l, such as a typical 3× 3 convolution, can be mapped to this hardware substrate by first flattening the weights into a single large 2D matrix, Wl, and then programming each element of this matrix into a memristive cell in the crossbar array, which provides the required conductances Gl (the reciprocal of resistance) to perform analog multiplication following Ohm’s law, iout = vinG. Note that a pair of differential pair of NVM devices are typically used to represent a signed quantity in Gl. Subsequently, input activations, xl converted into continuous voltages, v(xl), are streamed into the array rows from the left-hand side. The memristive devices connect row with columns, where the row voltages are converted into currents scaled by the programmed conductance, G, to generate the currents i(yl), which are differential in order to represent both positive and negative quantites with unipolar signals. The currents from each memristive device essentially add up for free where they are connected in the columns, according to Kirchhoff’s current law. Finally, the differential currents are converted to bipolar voltages, v(yl), which are they digitized before adding bias, and performing batch normalization and ReLU operations, which are not shown in Figure 1.
However, the analog inference hardware of Figure 1 is subject to real-world non-idealities, typically attributed to variations in: 1) manufacturing process, 2) supply voltage and 3) temperature, PVT variation collectively, all of which result in noise in the system. Below we discuss the two key components in terms of analog noise modeling.
Data Converters. Digital-to-analog converter (DAC) and analog-to-digital converter (ADC) circuits are designed to be robust to PVT variation, but in practice these effects do degrade the resolution (i.e. number of bits). Therefore, we consider effective number of bits (ENOB), which is a lower bound on resolution in the presence of non-idealities. Hence, we use activation and weight quantization with ENOB data converters and no additional converter noise modeling.
NVM cells. Due to their analog nature, memristive NVM cells have limited precision, due to the read and write circuitry (Joshi et al., 2019). In between write and read operations, their stored value is prone to drift over time. Long-term drift can be corrected with periodic refresh operations. At shorter timescales, time-varying noise may be encountered. For most of the experiments in this paper, we model generic NVM cell noise as an additive zero-mean i.i.d. Gaussian error term on the weights of the model in each particular layer ∆Wl ∼ N (∆Wl; 0, σ2N,lI). This simple model, described more concretely in Section 5, is similar to that used by Joshi et al. (2019) which was verified on real hardware. In addition, we also investigate spatially-varying and time-varying noise models in Section 5.2 (Table 1).
4 ANALYSIS OF NOISY NEURAL NETWORKS
4.1 BIAS VARIANCE DECOMPOSITION FOR NOISY WEIGHTS
Naively deploying an off-the-shelf pretrained model on a noisy accelerator will yield poor accuracy for a fundamental reason. Consider a neural network f(W;x) with weights W that maps an input x ∈ Rn to an output y ∈ R. In the framework of statistical learning, x and y are considered to be randomly distributed following a joint probability distribution p(x, y). In a noisy neural network, the weights W are also randomly distributed, with distribution p(W). The expected Mean Squared Error (MSE) of this noisy neural network can be decomposed as
E(x,y)∼p(x,y),W∼p(W)[(f(W;x)− y)2] =E(x,y)∼p(x,y),W∼p(W)[(f(W;x)− EW∼p(W)[f(W;x)] + EW∼p(W)[f(W;x)]− y)2] =Ex∼p(x)[EW∼p(W)[(f(W;x)− EW∼p(W)[f(W;x)])2]]
+ E(x,y)∼p(x,y)[(EW∼p(W)[f(W;x)]− y)2]. (1)
The first term on the right hand side of Equation 1 is a variance loss term due to randomness in the weights and is denoted as lvar. The second term is a squared bias loss term which we call lbias. However, typically a model is trained to minimize the empirical version of expected loss lpretrained = E(x,y)∼p(x,y)[(f(E[W];x) − y)2]. We assume that the noise is centered such that pretrained weights are equal to E[W]. A pretrained model is therefore optimized for the wrong loss function when deployed on a noisy accelerator. To show this in a more concrete way, a baseline LeNet model (32 filters in the first convolutional layer, 64 filters in the second convolutional layer and 1024 neurons in the fully-connected layer) (LeCun et al., 1998) is trained on MNIST dataset to 99.19% accuracy and then exposed to Gaussian noise in its weights, numerical values of these loss terms can be estimated. The expected value of the network output EW[f(W;x)] is estimated by averaging over outputs of different instances of the network for the same input x. We perform inference on n = 100 different instances of the network and estimate the loss terms as
$$\bar{f}(W;x) = \mathbb{E}_{W\sim p(W)}[f(W;x)] \simeq \frac{1}{n}\sum_{i=1}^{n} f(W_i;x), \tag{2}$$
$$\hat{l}_{var} = \frac{1}{N}\sum_{j=1}^{N}\frac{1}{n}\sum_{i=1}^{n}\big(f(W_i;x_j)-\bar{f}(W;x_j)\big)^2, \tag{3}$$
$$\hat{l}_{bias} = \frac{1}{N}\sum_{j=1}^{N}\big(\bar{f}(W;x_j)-y_j\big)^2, \tag{4}$$
$$\hat{l}_{pretrained} = \frac{1}{N}\sum_{j=1}^{N}\big(f(\mathbb{E}[W];x_j)-y_j\big)^2. \tag{5}$$
The above formulas are for a network with a scalar output. They can be easily extended to the vector-output case by averaging over all outputs. In the LeNet example, we take the output of the softmax layer to calculate squared losses. The noise is assumed i.i.d. Gaussian centered around zero with a fixed SNR $\sigma^2_{W,l}/\sigma^2_{N,l}$ in each layer l. The numerical values of the above losses are estimated using
the entire test dataset for different noise levels. Results are shown in Figure 2(a). $\hat{l}_{bias}$ is initially equal to $\hat{l}_{pretrained}$ and $\hat{l}_{var} = 0$ when there is no noise. However, as the noise level rises, they increase in magnitude and become much more important than $\hat{l}_{pretrained}$. $\hat{l}_{var}$ overtakes $\hat{l}_{bias}$ to become the predominant loss term in a noisy LeNet at $\sigma_N/\sigma_W \simeq 0.6$. It is useful to note that $l_{bias}$ increases with noise entirely due to nonlinearity in the network, which is ReLU in the case of LeNet. In a linear model, $l_{bias}$ would equal $l_{pretrained}$ since $f(\mathbb{E}[W];x) = \mathbb{E}[f(W;x)]$. A model trained in a conventional manner is thus not optimized for the real loss it is going to encounter on a noisy accelerator. Special retraining is required to improve its noise tolerance. In Figure 2(a), we show how the model accuracy degrades with a rising noise level for the baseline LeNet and its deeper and wider variants. The deeper network is obtained by stacking two more convolutional layers of width 16 in front of the baseline network and the wider network is obtained by increasing the widths of each layer in the baseline to 128, 256, 2048 respectively. Performance degradation due to noise is worse for the deeper variant and less severe for the wider one. A more detailed discussion of the network architecture effect on its performance under noise is offered in Section 4.2.
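A Monte-Carlo estimate of these loss terms can be written down directly from Eqs. (2)-(5). The sketch below is a minimal version under the paper's setup (n noisy instances per input, squared losses on softmax outputs, averaged over outputs); `sample_noisy_model` is our hypothetical hook that returns a fresh noisy copy of the network on each call.

```python
import torch

@torch.no_grad()
def estimate_loss_terms(sample_noisy_model, clean_model, loader, n=100):
    """Monte-Carlo estimates of Eqs. (2)-(5): average n noisy network
    instances per input to get f_bar (Eq. 2), then accumulate the
    variance, squared-bias and pretrained squared losses."""
    l_var = l_bias = l_pre = 0.0
    n_batches = 0
    for x, y in loader:  # y: one-hot targets, same shape as the outputs
        outs = torch.stack([sample_noisy_model()(x) for _ in range(n)])
        f_bar = outs.mean(dim=0)                                # Eq. (2)
        l_var += ((outs - f_bar) ** 2).mean().item()            # Eq. (3)
        l_bias += ((f_bar - y) ** 2).mean().item()              # Eq. (4)
        l_pre += ((clean_model(x) - y) ** 2).mean().item()      # Eq. (5)
        n_batches += 1
    return l_var / n_batches, l_bias / n_batches, l_pre / n_batches
```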
4.2 LOSS OF INFORMATION IN A NOISY NEURAL NETWORK
Information theory offers useful tools to study noise in neural networks. Mutual information I(X;Y) characterizes the amount of information obtained about a random variable X by observing another random variable Y. The mutual information between X and Y can be related to Shannon entropy by

$$I(X;Y) = H(Y) - H(Y|X). \tag{6}$$

Mutual information has been used to understand DNNs (Tishby & Zaslavsky, 2015; Saxe et al., 2018). Treating a noisy neural network as a noisy information channel, we can show how information about the input to the neural network diminishes as it propagates through the noisy computation. In this subsection, X is the input to the neural network and Y is the output. Mutual information is estimated for the baseline LeNet model and its variants using Equation 6. When there is no noise, the term H(Y|X) is zero as Y is deterministic once the input to the network X is known, therefore I(X;Y) is just H(Y) in this case. Shannon entropy H(Y) can be estimated using a standard discrete binning approach (Saxe et al., 2018). In our experiment, Y is the output of the softmax layer
which is a vector of length 10. Entropy H(Y) is estimated using four bins per coordinate of Y by

$$\hat{H}(Y) = -\sum_{i=1}^{B} p_i \log(p_i), \tag{7}$$

where $p_i$ is the probability that an output falls in bin i and B is the number of (joint) bins; note that the sum runs over bins rather than over test samples. When noise is introduced to the weights, the conditional entropy H(Y|X) is estimated by fixing the input X = x and performing multiple noisy inferences to calculate $\hat{H}(Y|X = x)$ with the above binning approach. $\hat{H}(Y|X = x)$ is then averaged over different inputs x to obtain $\hat{H}(Y|X)$. This estimate is performed for LeNet and its variants with different noise levels. Results are shown in Figure 2(b). The values are normalized to the estimate of I(X;Y) at zero noise. Mutual information between the input and the output decays towards zero with increasing noise in the network weights. Furthermore, mutual information in a deeper and narrower network decays faster than in a shallower and wider network. Intuitively, information from the input undergoes more noisy compute when more layers are added to the network, while a wider network has more redundant paths for the information to flow, thus better preserving it. An information theoretic bound on mutual information decay as a function of network depth and width in a noisy neural network will be treated in our follow-up work. Overall, noise damages the learning capacity of the network. When the output of the model contains no information from its input, the network loses all ability to learn. For a noise level that is not so extreme, a significant amount of mutual information remains, which indicates that useful learning is possible even with a noisy model.
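The binning estimator can be sketched as follows. This is a simplified version under stated assumptions: softmax outputs lie in [0, 1], so fixed-width bins suffice, and `noisy_forward` (our naming) re-samples weight noise on every call.

```python
import numpy as np

def binned_entropy(outputs, bins=4):
    """Eq. (7): bin each softmax coordinate into `bins` cells and
    estimate H from the joint-bin frequencies p_i."""
    cells = np.clip((outputs * bins).astype(int), 0, bins - 1)
    _, counts = np.unique(cells, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def estimate_mutual_information(noisy_forward, xs, n_repeat=50, bins=4):
    """I(X;Y) = H(Y) - H(Y|X) of Eq. (6). H(Y|X=x) is estimated from
    repeated noisy inferences at a fixed x, then averaged over inputs."""
    h_y = binned_entropy(np.stack([noisy_forward(x) for x in xs]), bins)
    h_y_given_x = np.mean([
        binned_entropy(np.stack([noisy_forward(x) for _ in range(n_repeat)]), bins)
        for x in xs])
    return h_y - h_y_given_x
```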
5 COMBINING NOISE INJECTION AND KNOWLEDGE DISTILLATION
5.1 METHODOLOGY
Noise injection during training is one way of exposing network training to a more realistic loss, as randomly perturbing weights simulates what happens in a real noisy analog device and forces the network to adapt to noise during training. Noise injection only happens in training during forward propagation, which can be considered as an approximation for calculating weight gradients with a straight-through estimator (STE) (Bengio et al., 2013). At each forward pass, the weight $W_l$ of layer l is drawn from an i.i.d. Gaussian distribution $\mathcal{N}(W_l; W_{l0}, \sigma^2_{N,l} I)$. The noise is referenced to the range of representable weights $W^l_{max} - W^l_{min}$ in that particular layer:
$$\sigma_{N,l} = \eta\,(W^l_{max} - W^l_{min}), \tag{8}$$
where η is a coefficient characterizing the noise level. During back propagation, gradients are calculated with clean weights $W_{l0}$, and only $W_{l0}$ gets updated by applying the gradient. $W^l_{max}$ and $W^l_{min}$ are hyperparameters which can be chosen with information on the weight distributions.
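As a sketch of this training-time noise injection, the layer below perturbs its weights on every forward pass while gradients flow to the clean weights only. It is a minimal PyTorch illustration under the scheme described above, with [w_min, w_max] fixed up front as in the text; the class name and defaults are ours.

```python
import torch
import torch.nn.functional as F

class NoisyLinear(torch.nn.Linear):
    """Linear layer with Gaussian weight-noise injection on the forward
    pass; the noise tensor carries no gradient, so backpropagation sees
    the clean weights W_0 (the STE-style approximation in the text)."""
    def __init__(self, in_features, out_features, eta, w_min=-1.0, w_max=1.0):
        super().__init__(in_features, out_features)
        self.w_min, self.w_max = w_min, w_max
        self.sigma = eta * (w_max - w_min)              # Eq. (8)

    def forward(self, x):
        w0 = self.weight.clamp(self.w_min, self.w_max)  # fixed weight range
        noise = self.sigma * torch.randn_like(w0)       # re-drawn every pass
        return F.linear(x, w0 + noise, self.bias)
```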
Knowledge distillation was introduced by Hinton et al. (2015) as a way for training a smaller student model using a larger model as the teacher. For an input to the neural network x, the teacher model generates logits $z_i^T$, which are then turned into a probability vector by the softmax layer:

$$q_i^T = \sigma(z_i^T; T) = \frac{\exp(z_i^T/T)}{\sum_j \exp(z_j^T/T)}. \tag{9}$$
The temperature, T, controls the softness of the probabilities. The teacher network can generate softer labels for the student network by raising the temperature T. We propose to use a noise-free clean model as the teacher to train a noisy student network. The student network is trained with noise injection to match a mix of hard targets and soft targets generated by the teacher. Logits generated by the student network are denoted as $z_i^S$. A loss function with distillation for the student model can be written as
$$L(x; W^S; T) = H\big(\sigma(z_i^S; T = 1),\, y_{true}\big) + \alpha T^2\, H\big(\sigma(z_i^S; T),\, q_i^T\big) + R(W^S_0). \tag{10}$$
Here H is the cross-entropy loss, $y_{true}$ is the one-hot encoding of the ground truth, and R is the L2-regularization term. Parameter α balances the relative strength between hard and soft targets: the first term is the standard cross-entropy on hard labels at temperature 1, while the second matches the student's softened output to the clean teacher's soft targets at temperature T. We follow the original implementation in Hinton et al. (2015), which includes a $T^2$ factor in front of the soft-target loss to balance gradients generated from the different targets. The student model is then trained
with Gaussian noise injection using this distillation loss function; the two techniques are thus combined by letting soft targets from the clean teacher guide a student whose forward passes run on noise-injected weights. The vanilla noise injection training corresponds to the case where α = 0. If the range of weights is not constrained and the noise reference is fixed, the network soon learns that the most effective way to decrease the loss is to increase the amplitude of the weights, which increases the effective SNR. There are two possible ways to deal with this problem. Firstly, the noise reference could be re-calculated after each weight update, thus updating the noise power. Secondly, we can constrain the range of weights by clipping them to the range $[W^l_{min}, W^l_{max}]$, and use a fixed noise model during training. We found that in general the second method of fixing the range of weights and training for a specific noise level yields more stable training and better results. Therefore, this is the training method that we adopt in this paper. A schematic of our proposed method is shown in Figure 5 of the Appendix.
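The distillation objective of Eq. (10) is straightforward to express in code. The sketch below omits the L2 term R (assumed here to be handled by the optimizer's weight decay) and takes the student logits from a noise-injected forward pass; the function name and defaults are ours.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, y_true, T=6.0, alpha=1.0):
    """Eq. (10) without the regularizer R. `student_logits` come from a
    forward pass with noise-injected weights; `y_true` holds integer
    class labels."""
    hard = F.cross_entropy(student_logits, y_true)          # H(sigma(z_S; T=1), y)
    q_teacher = F.softmax(teacher_logits / T, dim=1)        # Eq. (9)
    log_q_student = F.log_softmax(student_logits / T, dim=1)
    soft = -(q_teacher * log_q_student).sum(dim=1).mean()   # H(sigma(z_S; T), q_T)
    return hard + alpha * T * T * soft                      # T^2 balances gradients
```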
During training, a clean model is first trained to its full accuracy and then weight clipping is applied to clip weights to the range $[W^l_{min}, W^l_{max}]$. The specific range is chosen based on statistics of the weights. Fine-tuning is then applied to bring the weight-clipped clean model back to full accuracy. This model is then used as the teacher to generate soft targets. The noisy student network is initialized with the same weights as the teacher. This can be considered as a warm start to accelerate retraining. As we discussed earlier, the range of weights is fixed during training, and the noise injected into the student model is referenced to this range.
Our method also supports training for low-precision noisy models. Quantization reflects finite-precision conversion between analog and digital domains in an analog accelerator. Weights are uniformly quantized in the range $[W^l_{min}, W^l_{max}]$ before being exposed to noise. In a given layer, the input activations are quantized before being multiplied by noisy weights. The output results of the matrix multiplication are also quantized before adding biases and performing batch normalization, which are considered to happen in the digital domain. When training with quantization, the straight-through estimator is assumed when calculating gradients with back propagation.
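A uniform quantizer with a straight-through estimator, as assumed above, can be sketched as follows; the fixed range [q_min, q_max] is assumed given, and the function name is ours.

```python
import torch

def quantize_ste(x, q_min, q_max, bits=4):
    """Uniform quantization to 2^bits levels in the fixed range
    [q_min, q_max], with a straight-through estimator: the forward pass
    returns the quantized value, the backward pass is the identity."""
    levels = 2 ** bits - 1
    scale = (q_max - q_min) / levels
    q = torch.round((x.clamp(q_min, q_max) - q_min) / scale) * scale + q_min
    return x + (q - x).detach()
```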
5.2 EXPERIMENTAL RESULTS
In order to establish the effectiveness of our proposed method, experiments are performed for different networks and datasets. In this section we mainly focus on bigger datasets and models, while results on LeNet and its variants with some discussion of the network architecture effect can be found in Figure 6 of the Appendix. ResNets are a family of convolutional neural networks proposed by He et al. (2016), which have gained great popularity in computer vision applications. In fact, many other deep neural networks also use ResNet-like cells as their building blocks. ResNets are often used as industry standard benchmark models to test hardware performance. The first set of experiments we present consists of a ResNet-32 model trained on the CIFAR10 dataset. In order to compare fairly with the previous work, we follow the implementation in Joshi et al. (2019), and consider a ResNet-32(v1) model on CIFAR10 with weight clipping in the range $[-2\sigma_{W,l}, 2\sigma_{W,l}]$. The teacher model is trained to an accuracy of 93.845% using stochastic gradient descent with cosine learning rate decay (Loshchilov & Hutter, 2016), and an initial learning rate of 0.1 (batch size is 128). The network is then retrained with noise injection to make it robust against noise. Retraining takes place for 150 epochs, the initial learning rate is 0.01 and decays with the same cosine profile. We performed two sets of retraining, one without distillation in the loss (α = 0), and another with distillation loss (α = 1). Everything else was kept equal in these retraining runs. Five different noise levels are tested with five different values of η: {0.02, 0.04, 0.057, 0.073, 0.11}. Results are shown in Figure 3(a). Every retraining run was performed twice and inference was performed 50 times on the test dataset for one model, to generate statistically significant results. Temperature was set to T = 6 for the runs with distillation. We found that an intermediate temperature between 2 and 10 produces better results. The pretrained model without any retraining performs very poorly at inference time when noise is present. Retraining with Gaussian noise injection can effectively recover some accuracy, which we confirm as reported in Joshi et al. (2019). Our method of combining noise injection with knowledge distillation from the clean model further improves noise resilience by about 40% in terms of η, which is an improvement of almost 2× in terms of noise power $\sigma^2_N$.
The actual noise level in a given device can only be estimated, and will vary from one device to another and even fluctuate depending on the physical environment in which it operates (Section 3). Therefore, it is important that any method to enhance noise robustness can tolerate a range of noise
levels. Our method offers improved noise robustness, even when the actual noise at inference time is different from that injected at training time. It is shown in Figure 3(b) that the model obtained from distillation is more accurate and less sensitive to noise level differences between training and inference time. This holds for a range of different inference noise levels around the training level. In the previous experiments, we assume a fixed noise level parameterized by η. On real analog hardware, there could be additional non-idealities such as variation in noise level due to temperature fluctuation and nonuniform noise profile on different NVM cells due to statistical variation in the manufacturing process. We have conducted additional experiments to account for these effects.
Results from the experiments are shown in Table 1. Temporal fluctuation represents noise level variation over time. Noise η is randomly sampled from $\mathcal{N}(\eta; \eta_0, \sigma^2_\eta)$ for each inference batch. A noise temporal fluctuation level of 10% means that $\sigma_\eta = 0.1\eta_0$. Spatial noise level fluctuation introduces nonuniform diagonal terms in the noise covariance matrix. More concretely, each weight noise in our previous model is multiplied by a scale factor $\lambda_w$, with $\lambda_w$ drawn from a Gaussian distribution $\mathcal{N}(\lambda_w; 1, \sigma^2_\lambda)$. A noise spatial fluctuation level of 10% means that $\sigma_\lambda = 0.1$. The scale factors are generated and then fixed when the network is instantiated, therefore the noise during network inference is non-i.i.d. in this case. Results from our experiments show that there is no significant deviation when a combination of these non-ideal noise effects is taken into account.
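For reference, the two fluctuation models of Table 1 can be generated as below: per-weight spatial scale factors drawn once at network instantiation, and a per-batch noise level re-sampled over time. This is a minimal sketch with our own function names.

```python
import torch

def spatial_scale_factors(weight_shape, sigma_lambda=0.1):
    """lambda_w ~ N(1, sigma_lambda^2) per weight; drawn once when the
    network is instantiated and then kept fixed, making the inference
    noise non-i.i.d. (10% spatial fluctuation for sigma_lambda = 0.1)."""
    return 1.0 + sigma_lambda * torch.randn(weight_shape)

def batch_noise_level(eta0, fluctuation=0.1):
    """eta ~ N(eta0, (fluctuation * eta0)^2), re-sampled for every
    inference batch to model temporal noise-level variation."""
    return eta0 * (1.0 + fluctuation * torch.randn(()).item())
```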
The performance of our training method is also validated with quantization. A ResNet-18(v2) model is trained with quantization to 4-bit precision (ENOB) for both weights and activations. This corresponds to 4-bit precision conversions between digital and analog domains. A subset of training
data is passed through the full-precision model to calibrate the range for quantization: we choose the 0.1th and 99.9th percentiles as $q_{min}$ and $q_{max}$ for the quantizer. This range of quantization is fixed throughout training. The quantized model achieves an accuracy of 92.91% on the test dataset when no noise is present. The model is then re-trained for noise robustness. The noise level is referenced to the range of quantization of weights in one particular layer, such that $W^l_{min} = q_{min,l}$ and $W^l_{max} = q_{max,l}$. Results are shown for the same set of η values in Figure 4(a). In the distillation retraining runs, the full-precision clean model with an accuracy of 93.87% is used as the teacher and temperature is set to T = 6. Due to the extra loss in precision imposed by aggressive quantization, accuracy of the pretrained quantized model drops sharply with noise. At η = 0.057, the model accuracy drops to 87.5% without retraining and further down to 80.9% at η = 0.073. Even retraining with noise injection struggles, and the model retrained with only noise injection achieves an accuracy of 90.34% at η = 0.073. Our method of combining noise injection and distillation stands out by keeping the accuracy loss within 1% from the baseline up to a noise level of $\eta \simeq 0.07$.
One interesting aspect of using distillation loss during retraining with noise can be seen in Figure 4(b). The evolution of model accuracy on the test dataset is shown. When no distillation loss is used, the model suffers an accuracy drop (difference between blue and orange curves) around 2.08% when tested with noise. The drop (difference between green and red curves) is significantly reduced to around 0.6% when distillation loss is used. This observation indicates that training with distillation favors solutions that are less sensitive to noise. The final model obtained with distillation is actually slightly worse when there is no noise at inference time but becomes superior when noise is present.
Results on the ImageNet dataset for a ResNet-50(v1) network are shown in Table 2 to demonstrate that our proposed approach scales to a large-scale dataset and a deep model. A ResNet-50 model is first trained to an accuracy of 74.942% with weight clipping in the range [−2σW,l, 2σW,l]. This range is fixed as the reference for added noise. For ResNet-50 on ImageNet, only three different noise levels are explored, and the accuracy degrades very quickly beyond the noise level η = 0.06, as the model and the task are considerably more complex. Retraining runs for 30 epochs with an initial learning rate of 0.001 and cosine learning rate decay with a batch size of 32. For distillation, we used α = 1 and T = 6 as in previous experiments. Results are collected for two independent training runs in each setting and 50 inference runs over the entire test dataset. The findings confirm that training with distillation and noise injection consistently delivers more noise robust models. The accuracy uplift benefit also markedly increases with noise.
6 DISCUSSION
Effects of distillation Knowledge distillation is a proven technique to transfer knowledge from a larger teacher model to a smaller, lower capacity student model. This paper shows, for the first time, that distillation is also an effective way to transfer knowledge between a clean model and its noisy
counterpart, with the novel approach of combining distillation with noise injection during training. We give some intuition for understanding this effect with the help of Section 4.2: a noisy neural network can be viewed as a model with reduced learning capacity by the loss of mutual information argument. Distillation is therefore acting to help reduce this capacity gap.
In our experiments, distillation shows great benefit in helping the network to converge to a good solution, even with a high level of noise injected in the forward propagation step. Here, we attempt to explain this effect by the reduced sensitivity of the distillation loss. An influential work by Papernot et al. (2016) shows that distillation can be used to reduce the model sensitivity with respect to its input perturbations, thus defending against some adversarial attacks. We argue that distillation can achieve a similar effect for the weights of the network. Taking the derivative of the i-th output of the student network $q_i^S$ at temperature T with respect to a weight w yields
$$\frac{\partial q_i^S}{\partial w} = \frac{1}{T}\,\frac{\exp(z_i/T)}{\big(\sum_j \exp(z_j/T)\big)^2}\,\sum_j \exp(z_j/T)\left(\frac{\partial z_i}{\partial w} - \frac{\partial z_j}{\partial w}\right). \tag{11}$$
The 1/T scaling makes the output less sensitive to weight perturbation at higher temperature, thus potentially stabilizing the training when noise is injected into weights during forward propagation. We plan to work on a more formal analysis of this argument in our future work.
Hardware Performance Benefits The improvements in noise tolerance of neural networks demonstrated in this work have a potential impact on the design of practical analog hardware accelerators for neural network inference. Increased robustness to noisy computation at the model training level potentially means that the specification of the analog hardware can be relaxed. In turn, this can make it easier to achieve the hardware specification, or even allow optimizations to further reduce the energy consumption. An in-depth discussion of the trade-off between compute noise performance and hardware energy dissipation is beyond the scope of this paper, but we refer the interested reader to Rekhi et al. (2019) for more details. In summary, we believe that machine learning research will be a key enabler for practical analog hardware accelerators.
7 CONCLUSION
Analog hardware holds the potential to significantly reduce the latency and energy consumption of neural network inference. However, analog hardware is imprecise and introduces noise during computation that limits accuracy in practice. This paper explored the training of noisy neural networks, which suffer from reduced capacity leading to accuracy loss. We propose a training methodology that trains neural networks via distillation and noise injection to increase the accuracy of models under noisy computation. Experimental results across a range of models and datasets, including ImageNet, demonstrate that this approach can almost double the network noise tolerance compared with the previous best reported values, without any changes to the model itself beyond the training method. With these improvements in the accuracy of noisy neural networks, we hope to enable the implementation of analog inference hardware in the near future.

1. What is the main contribution of the paper, and how does it relate to previous works?
2. How does the proposed method combine noise injection and knowledge distillation?
3. Can the authors provide more clarity on the derivation of the new loss function (10)?
4. Are there any minor errors or typos in the paper that need correction?

Review
The manuscript illustrates how noise in a neural network reduces its learning capacity. To mitigate this loss, the authors propose a method that combines noise injection and knowledge distillation. However, from a conceptual point of view, their contribution (i.e., Eq. (10) in Section 5) is unclear to me. Specifically, the authors are not precise about how they merge the aforementioned previous ideas and arrive at the new loss function (10).
Minor comment: Please correct (7).
ICLR

Title
Towards Lightweight, Model-Agnostic and Diversity-Aware Active Anomaly Detection
Abstract
Active Anomaly Discovery (AAD) is flourishing in the anomaly detection research area, which aims to incorporate analysts’ feedback into unsupervised anomaly detectors. However, existing AAD approaches usually prioritize the samples with the highest anomaly scores for user labeling, which hinders the exploration of anomalies that were initially ranked lower. Besides, most existing AAD approaches are specially tailored for a certain unsupervised detector, making it difficult to extend to other detection models. To tackle these problems, we propose a lightweight, model-agnostic and diversity-aware AAD method, named LMADA. In LMADA, we design a diversity-aware sample selector powered by Determinantal Point Process (DPP). It considers the diversity of samples in addition to their anomaly scores for feedback querying. Furthermore, we propose a model-agnostic tuner. It approximates diverse unsupervised detectors with a unified proxy model, based on which the feedback information is incorporated by a lightweight non-linear representation adjuster. Through extensive experiments on 8 public datasets, LMADA achieved 74% F1-Score improvement on average, outperforming other comparative AAD approaches. Besides, LMADA achieves significant performance boosts under arbitrary unsupervised detectors.
1 INTRODUCTION
Anomaly detection aims to detect the data samples that exhibit significantly different behaviors compared with the majority. It has been applied in various domains, such as fraud detection (John & Naaz, 2019), cyber intrusion detection (Sadaf & Sultana, 2020), medical diagnosis (Fernando et al., 2021), and incident detection (Wang et al., 2020). Numerous unsupervised anomaly detectors have been proposed (Zhao et al., 2019; Boukerche et al., 2020; Wang et al., 2019). However, practitioners are usually unsatisfied with their detection accuracy (Das et al., 2016), because there is usually a discrepancy between the detected outliers and the actual anomalies of interest to users (Das et al., 2017; Zha et al., 2020; Siddiqui et al., 2018). To mitigate this problem, Active Anomaly Discovery (AAD) (Das et al., 2016), is proposed to incorporate analyst’s feedback into unsupervised detectors so that the detection output better matches the actual anomalies.
The general workflow of Active Anomaly Discovery is shown in Fig.1. In the beginning, a base unsupervised anomaly detector is initially trained. After that, a small number of samples are selected to present to analysts for querying feedback. The labeled samples are then utilized to update the detector for feedback information incorporation. Based on the updated detection model, a new set of samples are recommended for the next feedback iteration. Finally, the tuned detection model is ready to be applied after multiple feedback iterations, until the labeling budget is exhausted.
Despite the progress of existing AAD methods (Das et al., 2017; Zha et al., 2020; Siddiqui et al., 2018; Keller et al., 2012; Zhang et al., 2019; Li et al., 2019; Das et al., 2016), some intrinsic limitations of these approaches still pose great barriers to their real-world applications. Firstly, most AAD methods adopt the top-selection strategy for the feedback querying (Das et al., 2017; Zha et al., 2020; Siddiqui et al., 2018; Li et al., 2019), i.e., the samples with the highest anomaly scores are always prioritized for user labeling. However, it hinders exploring the actual anomalies that are not initially scored highly by the base detector. As such, these AAD approaches are
∗Qingwei Lin is the corresponding author.
highly susceptible to over-fitting to the top-ranked samples, resulting in a suboptimal recall with respect to all anomalies. We shall demonstrate this with a real example in Sec. 2.1. Secondly, most existing AAD approaches (Das et al., 2017; 2016; Siddiqui et al., 2018) are tightly tailored for a certain kind of detection model, making it difficult to extend to other unsupervised detectors.
They need to modify the internal structure of a particular type of unsupervised detector, endowing them with the ability of feedback integration. Therefore, it is impractical and ad-hoc to re-design them each time facing such a variety of unsupervised detection models. Recent AAD methods (Zha et al., 2020; Li et al., 2019)
attempted to generalize to arbitrary detectors. However, they can barely scale because their model size grows with the number of samples.
To tackle these problems in AAD, we propose a Lightweight, Model-Agnostic and Diversity-Aware active anomaly detection approach, named LMADA. It consists of two components, i.e., a sample selector (for sample selection) and a model tuner (for feedback incorporation). In the sample selector, we take the anomaly scores as well as the diversity of samples into account, instead of solely picking up the most anomalous ones for feedback querying. Specifically, we fuse anomaly scores and the feedback repulsion scores into a diversity-aware sampling technique powered by Determinantal Point Processes (DPP) (Chen et al., 2018; Kulesza et al., 2012). In the model tuner, we first leverage a neural network as the proxy model to approximate an arbitrary unsupervised detector. After that, we fix the weights of the proxy model and learn a representation adjuster on top of it. The representation adjuster is responsible for transforming the input feature vector to fit the feedback-labeled samples. Finally, each sample to be detected is transformed by the representation adjuster and then fed back to the base detector to estimate its anomaly score. In this way, the model tuner shields the details of different unsupervised detectors and achieves lightweight feedback incorporation, only via a non-linear representation transformation.
We conducted extensive experiments on 8 public AD datasets to evaluate the effectiveness of our proposed method. The experimental results show that LMADA can achieve 74% F1-Score improvement on average, outperforming other comparative AAD approaches under the same feedback sample budget. In addition, we also validated that LMADA works well under various unsupervised anomaly detectors.
2 RELATED WORK AND MOTIVATION
In this section, we will give a brief introduction to the existing AAD work and analyze their limitations from two aspects: (1) sample selection and (2) feedback incorporation.
2.1 SAMPLE SELECTION
Most AAD approaches (Siddiqui et al., 2018; Das et al., 2017; Zha et al., 2020; Li et al., 2019; Das et al., 2016) adopt the top-selection strategy. The anomalous samples, that are not ranked on the top initially by the base detector, would have little chance to be selected for feedback, and therefore can hardly be recalled subsequently. We show a real example using KDD-99 SA1, which is a famous intrusion detection dataset. The dataset contains one normal class (96.7%) and 11 anomalous classes (3.3%) of various intrusion types. We applied the Isolation Forest (Liu et al., 2012) detector (a widely accepted one) to this dataset and found that the recall was around 0.28. We show the anomaly score distribution for the normal samples and three major intrusion types, respectively, in Fig. 2. Only the samples of two intrusion types, i.e., “neptune” and “satan”, are assigned high anomaly scores (0.60 ∼ 0.70). However, the samples of another major intrusion type “smurf” (accounts for 71.27% of all anomalous samples) are assigned relatively low anomaly scores (0.50 ∼ 0.55), which is even below the anomaly scores of many normal samples (4168 normal samples vs. 15 “smurf” anomalies were assigned anomaly scores over 0.55). Under this circumstance, selecting the top samples only for feedback can hardly improve the recall for the “smurf” type. In LMADA, we consider both anomaly scores as well as the diversity of samples during the sample selection. In this way, samples
1https://archive.ics.uci.edu/ml/machine-learning-databases/kddcup99-mld/kddcup.data.gz
not initially ranked on the top, like the “smurf” anomalies in our example, can have an opportunity to present to analysts.
2.2 FEEDBACK INCORPORATION
How to incorporate feedback information is another focus of AAD. Das et al. (2017) added a set of adjustable weights to the random projections generated by the LODA detector (Pevnỳ, 2016), by which the feedback can be incorporated. They also modified Isolation Forest (Liu et al., 2012) by adding weights to the tree paths, re-weighting the isolation score based on the feedback (Das et al., 2016). Siddiqui et al. (2018) extended the re-weighting strategy to Generalized Linear Anomaly Detectors (GLAD) with the help of online convex optimization (Hazan et al., 2016). iRRCF-Active (Wang et al., 2020) applied a similar idea to the iRRCF detector (Guha et al., 2016). In summary, the above methods require tailoring the weights to the specific model structure of each unsupervised detector and then adjusting the weights with feedback-labeled samples by gradient descent. However, this is impractical for such a diverse range of unsupervised detectors, as the modification is sophisticated and case-by-case. In LMADA, we propose a model-agnostic method to incorporate feedback information, regardless of the type of unsupervised detector.
We also note that some AAD approaches have been proposed and attempted to support arbitrary base detectors. Meta-AAD (Zha et al., 2020) first extracts a set of transferable features based on k-neighbors to labeled instances and feeds them into a pre-trained meta-policy model for detection. GAOD (Li et al., 2019) leverages label spreading (Zhou et al., 2003), a graph-based semi-supervised model, to iteratively spread label information to neighbors. In summary, both AAD methods leverage neighborhoods of labeled instances to exploit feedback information but require persisting the entire dataset for neighboring sample retrieval. Therefore, the final tuned detection model would become increasingly heavier and heavier. In this paper, the feedback incorporation of LMADA is achieved by only a non-linear transformation, which is lightweight enough for real-world application.
3 APPROACH
In this section, we will elaborate on the details about LMADA. Following the general AAD workflow shown in Fig.1, LMADA consists of two components, i.e., sample selector and model tuner. In the sample selector, we consider the diversity in addition to the anomaly scores when recommending valuable samples for labeling. In the model tuner, we proposed a model-agnostic strategy to incorporate feedback information for arbitrary unsupervised detectors. It is achieved in a lightweight manner, only relying on a simple non-linear transformation.
3.1 SAMPLE SELECTOR
As discussed in Sec. 2.1, sample selection of AAD should consider the diversity of the selected samples in addition to the anomaly scores. The diversity here is not in terms of anomaly scores but in the distribution of the samples. In summary, our attempt is to select a subset of samples with high anomaly scores, and meanwhile, are dissimilar from each other. We use the example shown in Fig. 3 to illustrate this idea. There are two types of anomalies A and B that stray from the majority of samples. The anomaly scores (based on the Isolation Forest) are indicated by the colors. The
deeper the color, the higher the anomaly score. The selected samples are indicated by the blue cross markers. The number of selected samples is fixed as 20. Type-B anomalies are assigned relatively lower anomaly scores compared with type-A because they are more adjacent to the normal samples.
If we use the top-selection strategy, the selected samples would mostly come from type-A (as shown in the left subfigure of Fig.3), which may not cover the other types of anomalies. Therefore, the feedback would not help the AAD to recall more anomalies, e.g., type-B in this example. The desired sample selection is shown in the right subfigure of Fig.3, where the selector achieves a good coverage for all samples with relatively high anomaly scores. In this way, we can enhance the anomaly scores of all anomaly types, instead of only those originally ranked high by the base detector.
Inspired by (Chen et al., 2018), we leverage a widely-adopted diversity sampling method, i.e., Determinantal Point Processes (DPP) (Kulesza et al., 2012), to achieve the above sampling target. We first introduce DPP in Sec. 3.1.1, and then describe how we balance the dual objectives, i.e., anomaly score and diversity, in Sec. 3.1.2.
3.1.1 DETERMINANTAL POINT PROCESSES (DPP)
The Determinantal Point Process (DPP) was originally introduced from fermion systems in thermal equilibrium (Macchi, 1975; Chen et al., 2018). Recently, it has been successfully applied to various machine learning tasks, e.g., image search (Kulesza & Taskar, 2011a), document summarization (Kulesza & Taskar, 2011b) and recommendation systems (Gillenwater et al., 2014). Given a dataset $D = \{s_1, s_2, \ldots, s_n\}$, DPP aims to select a subset C from D. Specifically, DPP constructs a real positive semidefinite (PSD) kernel matrix $L \in \mathbb{R}^{n\times n}$ derived from D. For each subset $C \subseteq D$, the probability of selecting C from D, denoted as P(C), is proportional to $\det(L_C)$, where $\det(L_C)$ is the determinant of the principal minor $L_C$. The objective of DPP is to derive the $C^*$ which maximizes the value of $\det(L_C)$, as shown in Eq. 1. As an example, to achieve maximum diversity, the kernel matrix could be constructed as the pairwise similarity matrix (Kulesza et al., 2012).

$$C^* = \mathop{\arg\max}_{C \subseteq D} \det(L_C) \tag{1}$$
How to approximately solve this NP-hard problem (Ko et al., 1995) has been well studied in (Gillenwater et al., 2012; Han et al., 2017; Li et al., 2016; Chen et al., 2018) and we adopt the greedy algorithm proposed in (Chen et al., 2018) in our paper. We will introduce how to construct a specially tailored kernel matrix L for AAD in the next section.
3.1.2 KERNEL MATRIX CONSTRUCTION
In LMADA, we construct a kernel matrix L, whose entries can be formally written as Eq.2,
$$L_{ij} = \langle a_i r_i s_i,\; a_j r_j s_j \rangle = a_i a_j r_i r_j \langle s_i, s_j \rangle \tag{2}$$
where $a_i$ denotes the anomaly score uniformly re-scaled to the range [0, 1]. It is used to motivate DPP to select samples with high anomaly scores. Meanwhile, we need to select diverse samples within and across feedback iterations. In each feedback iteration, the inner product $\langle s_i, s_j \rangle$ measures the pairwise similarity of all candidate samples, based on which DPP prefers dissimilar samples (Kulesza et al., 2012). As there are multiple feedback iterations, we expect the samples selected in the current iteration to also differ from those sampled in previous iterations. To achieve this, we maintain a data pool P preserving the selected samples from the previous feedback iterations. The minimum distance between a candidate sample $s_i$ and the selected samples cached in P is defined as the feedback repulsion score $r_i$, as shown in Eq. 3:
$$r_i = \min\big(\{\, 1 - \langle s_i, s_k \rangle \;\big|\; \forall s_k \in P \,\}\big) \tag{3}$$
From Eq. 2, we can conclude that $\det(L_C)$ is proportional to $a_i a_j r_i r_j$ and inversely proportional to $\langle s_i, s_j \rangle$ among the selected samples in C. In this way, it induces DPP to select more anomalous (i.e., higher $a_i a_j$) data points that are not adjacent to the previously selected examples (i.e., higher $r_i r_j$). Meanwhile, the data points are also distinct enough from each other (i.e., lower $\langle s_i, s_j \rangle$). A qualitative analysis can be found in Appendix Sec. A.1.
Theoretically, the complexity of constructing L is $O(n^2)$, which is expensive for a large dataset. However, anomalous samples generally account for a small percentage of the whole dataset compared with the normal class (Zhao et al., 2019; Boukerche et al., 2020). For instance, in the KDD99-SA dataset introduced in Sec. 2.1, only 3.3% of the samples are anomalies. It is unnecessary to regard all samples as candidates for the sample selector. Consequently, we construct the kernel matrix with only the pre-truncated top α% samples ranked by their anomaly scores. In general, if α is small enough (e.g., < 3%), the selected samples would be those with the highest anomaly scores, i.e., similar to top-selection. On the other hand, if α is large (e.g., > 30%), the selected samples would become too diverse to retrieve samples worthwhile for feedback. We evaluate different α settings in Appendix Sec. A.8. A code sketch of the resulting selection procedure is given below.
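Putting Eqs. (1)-(3) and the pre-truncation together, a simple (deliberately unoptimized) sketch of the selector follows. It assumes anomaly scores are already rescaled to [0, 1] and rows of S are L2-normalized so the inner product acts as a similarity, and it uses a plain greedy MAP loop rather than the fast greedy algorithm of Chen et al. (2018) adopted in the paper; all names are ours.

```python
import numpy as np

def build_kernel(S, scores, pool, alpha=0.10):
    """Kernel of Eq. (2) on the top-alpha% pre-truncated candidates:
    L_ij = a_i a_j r_i r_j <s_i, s_j>, with the repulsion score r_i of
    Eq. (3) computed against the labeled pool P."""
    keep = np.argsort(-scores)[: max(1, int(alpha * len(scores)))]
    S, a = S[keep], scores[keep]
    if len(pool) > 0:
        r = 1.0 - (S @ np.asarray(pool).T).max(axis=1)   # Eq. (3)
    else:
        r = np.ones(len(S))
    q = a * r
    return q[:, None] * q[None, :] * (S @ S.T), keep

def greedy_dpp(L, k):
    """Plain greedy MAP inference: repeatedly add the candidate that
    maximizes the log-determinant of the principal minor (a simple
    stand-in for the fast greedy algorithm used in the paper)."""
    chosen = []
    for _ in range(k):
        best, best_val = -1, -np.inf
        for i in range(len(L)):
            if i in chosen:
                continue
            idx = chosen + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_val:
                best, best_val = i, logdet
        if best < 0:
            break
        chosen.append(best)
    return chosen
```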
3.2 MODEL TUNER
After labeling the examples recommended by the sample selector, the model tuner focuses on how to incorporate newly labeled data points. The model tuner should be agnostic to the base unsupervised detectors. In other words, any unsupervised detection model can be easily integrated into our framework. To achieve this goal, we propose a three-phases model tuner in LMADA, as shown in Fig. 4. Firstly, we set up a neural network as the proxy model (Coleman et al., 2019) to mimic the behaviors of diverse base detectors. After that, a representation adjuster is added in front of the frozen proxy model to get trained based on the labeled samples. Finally, the tuned representation adjuster is used to transform the original samples into new representation vectors, which will be fed back to the base detector for re-scoring. The feedback continues for multiple iterations until the sampling budget is exhausted. The tuned representation adjuster can be applied as illustrated in the Phase-3 of Fig.4. Given a testing sample si, we first transform it into a new representation vector hi via the representation adjuster Ω (si). Then we directly feed hi into the base anomaly detector f and get the final detection results f (hi). In this way, LMADA achieves feedback incorporation in a lightweight manner, only with a non-linear representation transformation.
3.2.1 PROXY MODEL APPROXIMATION
As introduced in Sec. 2.2, unsupervised detectors of various types pose a great challenge to modelagnostic AAD. There are significant differences between the model structures of different unsupervised detectors. Most existing AAD work (Siddiqui et al., 2018; Das et al., 2017; 2016; Wang et al., 2020) needs to specifically modify the internal structure of unsupervised detectors.
To tackle this problem, we utilize a deep neural network as the proxy model to approximate the behaviors of diverse unsupervised detectors. In this way, we can turn unsupervised detectors into gradient-optimizable neural networks, which facilitate the subsequent representation adjuster tuning (more details presented in Sec.3.2.2). As shown in Phase-1 of Fig. 4, we use the normalized anomaly scores f(si) generated by the base detector as the pseudo-labels and set up a neural network Φ in parallel to fit them. The proxy model is composed of one input layer and multiple hidden layers followed by an output layer activated by the sigmoid function. The Mean-
Squared-Error (MSE) is adopted as the loss function during proxy model training, $L_{proxy} = \sum_{i=1}^{b} \big(\Phi(s_i) - f(s_i)\big)^2$, where b denotes the batch size.
After the proxy model training, the anomalous patterns that are captured by the base detector have been learned by the proxy model, i.e., the proxy anomaly scores $\Phi(s_i) \approx f(s_i)$. The key point here is that the internal structures of different unsupervised detectors do not need to be considered in this training process.
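Phase-1 can be sketched as a standard regression loop: the proxy network fits the base detector's normalized anomaly scores under the MSE loss. This is a minimal version with our own function name and training-loop details (the paper's hyperparameters are listed in Appendix Sec. A.5).

```python
import torch

def train_proxy(proxy, X, base_scores, epochs=10, lr=0.01, batch_size=512):
    """Phase-1: fit the proxy network Phi to the base detector's
    normalized anomaly scores f(s_i) under the MSE loss L_proxy."""
    opt = torch.optim.Adam(proxy.parameters(), lr=lr)
    for _ in range(epochs):
        perm = torch.randperm(len(X))
        for start in range(0, len(X), batch_size):
            idx = perm[start:start + batch_size]
            loss = ((proxy(X[idx]).squeeze(-1) - base_scores[idx]) ** 2).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return proxy
```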
3.2.2 REPRESENTATION ADJUSTER TUNING
In Phase-2, we devise a representation adjuster Ω in front of the proxy model to incorporate the feedback information. The representation adjuster is a simple non-linear transformation layer, which takes the original sample vector $s_i$ as input and transforms it into a new feature space with the same dimensionality, i.e., $h_i = \Omega(s_i) = \mathrm{sigmoid}(W s_i)$, where $h_i \in \mathbb{R}^d$ and $s_i \in \mathbb{R}^d$. As shown in the middle of Fig. 4, the transformed $h_i$ will be fed into the trained proxy model Φ to generate the proxy anomaly score $\Phi(h_i)$. Based on that, W will be updated under the loss function in Eq. 4. The representation adjuster can be trained by a gradient descent optimizer because the subsequent proxy model (as shown in Fig. 4) is also a neural network. The parameters of the proxy model are frozen during the representation adjuster tuning phase.
$$L_{adjuster} = L_{feedback} + L_{consolidation} + \eta \tag{4}$$
$L_{adjuster}$ is composed of three components, i.e., the feedback loss, the consolidation loss and a regularization item η. $L_{feedback}$ is used to fit the labeled samples in the data pool P, as shown in Eq. 5, where $y_i$ represents the feedback label (+1 for the anomalous class and -1 for the normal class) for the sample $s_i$:
$$L_{feedback} = -\sum_{i=1}^{b} y_i \cdot \log\big(\Phi(h_i)\big), \quad \forall s_i \in P \tag{5}$$
Training with only a few labeled samples would make the representation adjuster biased toward the feedback labels but ignore the patterns already learned from the base detector. So we design another component inspired by (Li & Hoiem, 2017), i.e., $L_{consolidation}$, that serves to consolidate the knowledge of the base unsupervised detector, as shown in Eq. 6. $\tilde{h}_i$ denotes the transformed sample representation from the last feedback iteration ($\tilde{h}_i = s_i$ in the first feedback iteration). It forces the proxy anomaly scores $\Phi(h_i)$ of the remaining unlabeled samples to be stabilized around the original anomaly scores $f(\tilde{h}_i)$ in the newly transformed feature space. We note that $L_{consolidation}$
is not conducive to fitting Lfeedback as the former tends to remain the original representation. To achieve a trade-off between them, we assign a weight for the consolidation loss of each sample. Intuitively, if an unlabeled sample si is similar to the labeled samples in the feedback data pool P , its consolidation loss should have a lower weight, reducing the constraints for fitting Lfeedback . On the contrary, those unlabeled samples, which are unlike the data points in P , should be assigned a higher weight to enhance the influence of the consolidation loss. This intuition is fully aligned with the feedback repulsion score ri introduced in Sec.3.1.2 and we thus use it as the weight of consolidation loss.
$$L_{consolidation} = \sum_{i=1}^{b} r_i \cdot \big(\Phi(h_i) - f(\tilde{h}_i)\big)^2, \quad \forall s_i \notin P \tag{6}$$
The last component is the penalty for feature-space transformation, because an extremely dramatic change to the original sample vectors is undesired. To achieve this, we set $\eta = \sum_{i=1}^{b} \lVert h_i - s_i \rVert^2$. More training details for the representation adjuster can be found in Appendix Sec. A.2. A code sketch of one training batch is given below.
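Combining Eqs. (4)-(6) and the penalty term, one batch of the adjuster loss can be sketched as follows. The proxy's parameters are assumed frozen (only the adjuster is passed to the optimizer); the masks, names and the clamp guarding log(0) are ours.

```python
import torch

def adjuster_loss(proxy, adjuster, s, y, labeled_mask, r, f_prev):
    """One batch of Eq. (4). `labeled_mask` marks samples from the pool P,
    `y` holds their +1/-1 feedback labels, `r` the repulsion scores and
    `f_prev` the base detector's scores on the previous representations."""
    h = adjuster(s)                       # h_i = sigmoid(W s_i)
    phi = proxy(h).squeeze(-1)            # proxy anomaly scores in (0, 1)
    lab, unlab = labeled_mask, ~labeled_mask
    l_feedback = -(y[lab] * torch.log(phi[lab].clamp_min(1e-8))).sum()      # Eq. (5)
    l_consolidation = (r[unlab] * (phi[unlab] - f_prev[unlab]) ** 2).sum()  # Eq. (6)
    eta = ((h - s) ** 2).sum()            # transformation penalty
    return l_feedback + l_consolidation + eta
```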
4 EXPERIMENT
4.1 DATASETS AND SETTINGS
We evaluated our proposed method on 8 public datasets, including PageBlocks, Annthyroid, Cardio, Cover, KDD99-Http, Mammography, KDD99-SA, Shuttle, which are widely used by existing AAD
approaches (Siddiqui et al., 2018; Zha et al., 2020; Li et al., 2019; Das et al., 2017; 2019). The details of these datasets can be found in Appendix Sec. A.3. We run 5 feedback iterations and query 20 samples in each iteration. Same as the existing work, we used simulation instead of real user feedback since all the ground truth is known for these public datasets. The experimental environment and the parameters setting can be found in Appendix Sec. A.4 and Sec. A.5, respectively.
4.2 COMPARISON METHODS AND METRICS
We compared LMADA with three state-of-the-art AAD methods, i.e., FIF (Siddiqui et al., 2018), Meta-AAD (Zha et al., 2020), and GAOD (Li et al., 2019). FIF adds a set of weights to the tree branches of the Isolation Forest detector and tunes them via online convex optimization with feedback information. GAOD utilizes the semi-supervised method (label spreading Zhou et al. (2003)) to consume user feedback. Both of the above approaches adopt the top-selection strategy. Meta-AAD extracts a set of transferable features for a pre-trained meta-policy detection model, considering both long-term and short-term benefits in querying feedback.
We use F1-Score Curve to evaluate the effectiveness of different AAD methods. Specifically, we calculate F1-Score on the entire dataset after finishing an iteration of feedback. Besides, we also calculate the Area-Under-Curve (AUC) (Ling et al., 2003) of the F1-Score Curve.
4.3 COMPARISON EXPERIMENT RESULTS
We compared our proposed method with three state-of-the-art AAD approaches and the results are illustrated in Fig. 5. For fairness, we used Isolation Forest as the base detector because it was adopted by all the comparison methods (Zha et al., 2020; Siddiqui et al., 2018; Li et al., 2019). To ensure reproducibility, we repeated our experiments 10 times on each dataset and plotted the average F1-Score and the standard error bar (Altman & Bland, 2005). The AUC value of each F1-Score Curve is shown in the legend.
From the results, we can confirm that LMADA performs better than other AAD methods. With 20 feedback samples per iteration, LMADA achieved consistently higher F1-Score on most datasets. Especially on KDD99-SA, Cover, and Cardio datasets, LMADA boosted the F1-Score of the base detector by an average of 144% to 0.80+ after 5 feedback iterations. For PageBlocks, Annthyroid, and Mammography datasets, LMADA also increased the F1-Score by 60% on average, significantly outperforming other AAD models. As for the KDD99-Http and Shuttle dataset, we can see that the initial performance of the base detector has reached a relatively high level. Under this circumstance, LMADA also can hold a high detection accuracy, exhibiting its robustness.
Among the comparison methods, Meta-AAD performed much better than the other two because it utilizes reinforcement learning to learn a meta-policy for feedback querying, rather than simply picking up the samples with the highest anomaly scores. However, the diversity of samples is not taken into account explicitly, resulting in relatively lower performance compared with LMADA (e.g. 0.29 AUC of Meta-AAD vs. 0.87 AUC of LMADA in KDD99-SA dataset). FIF and GAOD even had difficulty preserving the upward trend of their F1-Score curves, although more feedback samples were added. As we discussed in Sec.2.1, the top-selection strategy of both methods hinders the exploration of the lower-ranked anomalous samples. Moreover, their detectors were tuned to over-fit the scarce feedback-labeled samples, leading to a decreasing recall. We have verified this in Appendix Sec. A.9.
4.4 MODEL-AGNOSTIC EVALUATION
We target to propose a model-agnostic AAD approach, which can be easily extended to arbitrary unsupervised detectors. As such, we evaluated the effectiveness of LMADA under five different but commonly-used unsupervised detectors, including AutoEncoder (Vincent et al., 2010), PCA (Shyu et al., 2003), OCSVM (Schölkopf et al., 2001), LODA (Pevnỳ, 2016; Das et al., 2016), and IF. The experimental settings are the same as that in Sec.4.3 and the results are shown in Fig. 6.
From these figures, we can conclude that LMADA works well on different unsupervised detectors. It can consistently improve the F1-Score on all eight datasets, whichever base detector is adopted. More than that, we also found that the performance gains achieved by LMADA vary with different unsupervised detectors. Taking the KDD99-Http dataset as an example, we can see that LODA performs much worse than the other base detectors at the beginning (F1-Score 0.02 compared to ∼0.82 of the other detectors). Even so, LMADA was also able to improve the performance of LODA from 0.02 to 0.96 after 5 iterations. We also noted that the variance of its results is significantly larger than the others. The reason is that LODA is inaccurate and unstable on the KDD99-Http dataset, making it difficult to provide effective information for the sample selector and the model tuner. These experiment results confirm that the initial performance of base detectors has a great influence on AAD approaches.
4.5 SAMPLE SELECTOR VALIDATION
In this section, we validated the effectiveness of our proposed sample selector in LMADA. As we discussed in Sec. 2.1, diversity plays a critical role in AAD. In order to verify this point, we conducted an ablation study on the KDD99-SA dataset. In this dataset, 11 anomalous classes and the normal class are well annotated separately so that we can study how samples would be selected by different sampling strategies. We compared our proposed sampling method with the commonly-used
top-selection strategy (Das et al., 2017; 2016; Siddiqui et al., 2018), and the stratified sampling described in (Guha et al., 2016) (i.e., divide samples into g groups based on their anomaly scores and then select examples randomly from each group). The model tuner is fixed. The selected anomalous classes under these settings and their corresponding improved F1-Scores are shown in Fig. 7(a) and Fig. 7(b), respectively.
From Fig.7(a), we can see that the sample selector of LMADA is able to cover more anomaly classes, compared with the other two sampling strategies. Furthermore, we also confirm the necessity of the diversity-aware selection from Fig.7(b) since our sample selector achieved much higher F1-Scores than those under the top-selection or the stratified sampling methods. For example, in the first feedback iteration, our proposed sample selector chose “smurf” samples (shown in blue color) for feedback, which were missed by the other two. As we stated in Sec.2.1, “smurf” samples were not assigned high anomaly scores by the base detector (IF) but they actually account for 71.27% of all anomalies. Therefore, we can see that F1-Score can be significantly improved from 0.28 to 0.94 with labeled “smurf” anomalies, while the other two strategies failed to achieve this high F1-Score. The complete results on all datasets can be found in Appendix Sec. A.6.
4.6 MODEL TUNER VALIDATION
In this section, we will present the effectiveness of our proposed model tuner. As introduced in Sec.3.2, the transformed representations hi are trained based on the proxy model but will be fed back to the base unsupervised detector to get the final anomaly scores. We aim to study how large the difference between the anomaly scores generated by the base detector f (hi) and the proxy model Φ (hi), respectively. We also conducted this ablation experiment on the KDD99-SA dataset and the results are exhibited in Fig. 7(c).
This figure shows that there is only a narrow gap in F1-Scores between the proxy model (green line) and the base unsupervised detector (red line). It manifests that the proxy model has captured the knowledge learned by the base detection method as they produced similar anomaly scores. As such, the transformed representations hi trained via the proxy model can be smoothly transferred to the base unsupervised detector. The complete experimental results on all datasets can be referred to Appendix Sec. A.7.
5 CONCLUSION
In this paper, we propose LMADA, a lightweight, model-agnostic and diversity-aware active anomaly detection method. In the sample selector of LMADA, we take the anomaly scores as well as the diversity of samples into account, unlike most existing AAD work that solely picks the most anomalous ones for feedback querying. In the model tuner of LMADA, we propose a model-agnostic strategy to incorporate feedback information, regardless of the type of unsupervised detector. It can be achieved by a lightweight non-linear transformation. Through the extensive evaluation on 8 public AD datasets, we show that LMADA can achieve 74% F1-Score improvement on average, significantly outperforming other comparative AAD approaches.
A APPENDIX
A.1 THE QUALITATIVE ANALYSIS OF EXTENDED DPP IN SAMPLE SELECTOR
The kernel matrix L is shown in Eq. 2. As introduced in Sec. 3.1.1, we aim to select a subset C with the highest $\det(L_C)$. The principal minor $L_C$ is as follows:

$$L_C = \begin{bmatrix}
a_1^2 r_1^2 \langle s_1, s_1\rangle & \cdots & a_1 a_j r_1 r_j \langle s_1, s_j\rangle & \cdots & a_1 a_{|C|} r_1 r_{|C|} \langle s_1, s_{|C|}\rangle \\
a_2 a_1 r_2 r_1 \langle s_2, s_1\rangle & \cdots & a_2 a_j r_2 r_j \langle s_2, s_j\rangle & \cdots & a_2 a_{|C|} r_2 r_{|C|} \langle s_2, s_{|C|}\rangle \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
a_i a_1 r_i r_1 \langle s_i, s_1\rangle & \cdots & a_i a_j r_i r_j \langle s_i, s_j\rangle & \cdots & a_i a_{|C|} r_i r_{|C|} \langle s_i, s_{|C|}\rangle \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
a_{|C|} a_1 r_{|C|} r_1 \langle s_{|C|}, s_1\rangle & \cdots & a_{|C|} a_j r_{|C|} r_j \langle s_{|C|}, s_j\rangle & \cdots & a_{|C|}^2 r_{|C|}^2 \langle s_{|C|}, s_{|C|}\rangle
\end{bmatrix} \tag{7}$$
The determinant $\det(L_C)$ can be calculated as in Eq. 8:

$$\det(L_C) = \sum (-1)^{\tau(p_1, p_2, \ldots, p_{|C|})}\, L_{1p_1} L_{2p_2} \cdots L_{|C|p_{|C|}} \tag{8}$$

where $p_1, p_2, \ldots, p_{|C|}$ ranges over all permutations of $\{1, 2, \ldots, |C|\}$, and $\tau(p_1, p_2, \ldots, p_{|C|})$ is the number of inversions (the reverse order number) of $p_1, p_2, \ldots, p_{|C|}$. According to Eq. 2, $\det(L_C)$ can be further expanded as Eq. 9:
$$
\begin{aligned}
\det(L_C) &= \prod_{i=1}^{|C|} a_i^2 r_i^2 \sum (-1)^{\tau(p_1, p_2, \ldots, p_{|C|})} \langle s_1, s_{p_1}\rangle \langle s_2, s_{p_2}\rangle \cdots \langle s_{|C|}, s_{p_{|C|}}\rangle \qquad (9) \\
&= \prod_{i=1}^{|C|} a_i^2 r_i^2 \cdot \left|\det\Big([s_1, s_2, \ldots, s_{|C|}]^\top [s_1, s_2, \ldots, s_{|C|}]\Big)\right| \qquad (10) \\
&= \prod_{i=1}^{|C|} a_i^2 r_i^2 \cdot \big(s_1 \otimes s_2 \otimes \cdots \otimes s_{|C|}\big)^2 = \prod_{i=1}^{|C|} a_i^2 r_i^2 \cdot V^2 \qquad (11)
\end{aligned}
$$
$\prod_{i=1}^{|C|} a_i^2 r_i^2$ is the common factor extracted from $\det(L_C)$. As such, we can conclude that $\det(L_C)$ is proportional to $a_i$ and $r_i$, inducing DPP to select samples that have high anomaly scores and differ from those that have already been selected in the data pool P.
The second term, $\sum (-1)^{\tau(p_1, \ldots, p_{|C|})} \langle s_1, s_{p_1}\rangle \langle s_2, s_{p_2}\rangle \cdots \langle s_{|C|}, s_{p_{|C|}}\rangle$, can be rewritten in the exterior-product form $(s_1 \otimes s_2 \otimes \cdots \otimes s_{|C|})^2$ shown in Eq. 11. According to the definition of the exterior product (Browne, 2012), it geometrically represents the squared volume $V^2$ of the parallel polyhedron spanned by the vectors $\{s_1, s_2, \ldots, s_{|C|}\}$. Consequently, the more dissimilar the vectors are, the larger the volume V of the spanned polyhedron, and hence the larger $\det(L_C)$.
A.2 LABELED SAMPLES OVERSAMPLING
In the model tuner, we use the labeled samples to train the representation adjuster. Nevertheless, compared to the unlabeled samples, the feedback-labeled samples only account for a tiny percentage of the overall dataset (e.g., 20 samples per iteration vs. 286048 samples in total of the Cover dataset). Therefore, we need to over-sample the labeled samples in each training batch to improve the utilization of such a few feedback samples, so that we can fully exploit the feedback information and accelerate the loss convergence. Half of each training batch are labeled samples, which are repeatedly drawn from the data pool P , and the other half are unlabeled samples, which are randomly sampled from the all unlabeled samples.
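A minimal sketch of this batch construction is given below; the index arrays and function name are ours.

```python
import numpy as np

def mixed_batch(labeled_idx, unlabeled_idx, batch_size=512, rng=np.random):
    """Half of each training batch is drawn with replacement from the
    small feedback pool P (oversampling), the other half uniformly from
    the unlabeled samples."""
    half = batch_size // 2
    lab = rng.choice(labeled_idx, size=half, replace=True)
    unlab = rng.choice(unlabeled_idx, size=half, replace=False)
    return np.concatenate([lab, unlab])
```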
A.3 DATASETS INFORMATION
We used eight public datasets for the evaluation. PageBlocks, Annthyroid, Cardio, Cover, Mammography, Shuttle are available in ODDS2. KDD99-Http and KDD99-SA are available in the UCI Machine Learning Repository3. PageBlocks can also be found in ADBench4. The detailed information of these datasets is shown in Table 1: the number of samples ranges from 1.8K to 286K and the anomaly rate spans from 0.96% to 9.61%.

[Table 1: per-dataset statistics, with columns Datasets, Samples, Dimension, Anomaly Number and Anomaly Rate.]

2http://odds.cs.stonybrook.edu/
A.4 EXPERIMENT ENVIRONMENT
We built LMADA based on PyTorch 1.12.0 (Paszke et al., 2019) and used base unsupervised anomaly detectors implemented in PyOD 1.0.3 (Zhao et al., 2019). In our experiments, we set up a Virtual Machine (VM) with 64 Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz processors and 256GB RAM. The operating system is Ubuntu-20.04. In the VM, we had an NVIDIA Tesla M40 GPU with CUDA 11.4 for deep learning model training.
A.5 EXPERIMENT SETTING DETAILS
LMADA: For the sample selector of LMADA, we set the pre-truncation rate α = 10%. We introduce two hyper-parameters λ and γ to adjust the preference between anomaly score and diversity ($L_{ij} = (a_i a_j)^{\lambda} (r_i r_j \langle s_i, s_j\rangle)^{\gamma}$). In the experiments, we set λ = 1 and γ = 1. In the model tuner, we utilized the Adam optimizer (Kingma & Ba, 2014) and set the epoch number to 10, the learning rate to 0.01, and the batch size to 512, for both the proxy model approximation phase and the representation adjuster tuning phase. The size of the proxy model hidden layer is set to 64. Specifically for the SA dataset, we performed dimension reduction (Carreira-Perpinán, 1997) on it because it is characterized by high feature dimensions and sparsity.
Meta-AAD: We used the source code available at the link provided in the original paper⁵. We used 12 datasets (including toy, yeast, glass, ionosphere, lympho, pima, thyroid, vertebral, vowels, wbc, wine, and yeast) for meta-policy training in our experiment. All the datasets are available in the released code repository⁶. After that, we directly applied the trained meta-policy to the 8 target public datasets. We borrowed the default settings from the original paper in our experiments: rollout steps T = 128, entropy coefficient c2 = 0.01, learning rate lr = 2.5 × 10⁻⁴, value function coefficient c1 = 0.5, λ = 0.95, clip range ϵ = 0.2, balance parameter γ = 0.6.
FIF: We used the source code released at the link provided in the original paper⁷. We chose the log-likelihood loss function for FIF in the experiment, and set the type of regularizer to w = 2 and the learning rate to a = 1.
GAOD: We implemented GAOD ourselves following Li et al. (2019), as its source code is not publicly released. We set the number of nearest neighbors to k = 30 and the learning rate of label spreading to α = 0.995. The standard deviation σ of the Gaussian function is set to half of the 95th percentile of the k-th nearest neighbor distances.
3 https://archive.ics.uci.edu/ml/machine-learning-databases/kddcup99-mld/kddcup.data.gz
4 https://github.com/Minqi824/ADBench
5 https://github.com/daochenzha/Meta-AAD
6 https://github.com/daochenzha/Meta-AAD/tree/master/data
7 https://github.com/siddiqmd/FeedbackIsolationForest
We note that the pairwise distance matrix is required by Meta-AAD and GAOD (for neighborhood retrieval). As such, both approaches fail to work at large data volumes due to the high space complexity (O(n²)). Taking the largest dataset, Cover, as an example (shown in Table 1), the pairwise distance matrix would consume about 610 GB of memory in theory, which triggers the Out-Of-Memory (OOM) problem in our experiment environment. Therefore, we keep only the top 50% and 20% of samples for KDD99-SA and Cover, respectively, based on the anomaly scores produced by the base detector. Only these samples are involved in the feedback incorporation of Meta-AAD and GAOD.
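As a quick sanity check of the 610 GB figure (our arithmetic, assuming a dense n × n matrix of 8-byte double-precision entries):

$$286048^2 \times 8\ \text{B} \approx 6.5 \times 10^{11}\ \text{B} \approx 610\ \text{GiB}$$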
A.6 THE COMPLETE RESULTS OF SAMPLE SELECTOR VALIDATION
We illustrate the sample selector validation results on all 8 datasets in Fig. 8. Our sampling strategy outperforms the other sampling methods on most datasets. Comparing with the results of FIF and GAOD shown in Fig. 5, we also found that our proposed method still achieved much better F1-Scores even when using the top-selection strategy in the same manner, which confirms the effectiveness of our proposed model tuner from another angle.
A.7 THE COMPLETE RESULTS OF MODEL TUNER VALIDATION
We show the model tuner validation results on all eight datasets in Fig. 9. These figures confirm the conclusion in Sec. 4.6: the proxy model has captured the knowledge learned by the base detection method, as the two produce similar anomaly scores. As such, the transformed representation $h_i$ can be directly fed into the base detector.
A.8 EFFECTIVENESS OF PRE-TRUNCATION IN SAMPLE SELECTOR
In Sec. 3.1.2, we introduced pre-truncation to improve sampling efficiency. In this section, we validate its effectiveness in the sample selector. Specifically, we varied α from 1% to 60% and recorded the running time and the corresponding AUC of the F1-Score curve under each α value, as shown in Fig. 10. From the left panel of Fig. 10, we conclude that the running time can be significantly reduced by more aggressive pre-truncation; for example, it is roughly halved when α is reduced from 50% to ∼6%. Moreover, the right panel of Fig. 10 shows that the AUC of the F1-Score curve rises while α < 10% and then gradually drops as α keeps increasing. As discussed in Sec. 3.1.2, this manifests that either a too broad or a too narrow set of candidate samples leads to suboptimal feedback querying. In general, we set α around the estimated contamination ratio, e.g., 10%.
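The pre-truncation step itself is a one-liner; a sketch (ours), assuming that higher scores indicate more anomalous samples:

```python
# Keep only the top-alpha fraction by anomaly score before building the
# DPP kernel (alpha is set near the estimated contamination ratio, e.g. 10%).
import numpy as np

def pre_truncate(scores, alpha=0.10):
    n_keep = max(1, int(len(scores) * alpha))
    return np.argsort(scores)[::-1][:n_keep]  # indices of the top-alpha% samples
```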
A.9 EXPLORATION OF THE OVER-FITTING PROBLEM
In Sec. 4.3, we found that the comparison methods performed much worse than LMADA. From the feedback-incorporation perspective, this is caused by over-fitting to the few top-ranked samples (see Sec. 1). To verify this point, we take GAOD as an example and gradually increase the number of queried samples in each feedback iteration to mitigate the over-fitting problem. We re-ran GAOD on three datasets (PageBlocks, Shuttle, and KDD99-SA) on which it did not perform well. According to the settings described in the original GAOD paper, the size of the data pool should be set to 2 × #outliers (Li et al., 2019). We therefore enlarged the data pool size from 0.5 to 2 × #outliers with a stride of 0.5. From the results shown in Fig. 11, we see that GAOD only achieves improvements in F1-Score with at least 0.5 × #outliers (e.g., the number of queried samples reaches 168 per iteration on the KDD99-SA dataset, far beyond the 20 per iteration of our proposed approach). It therefore requires a significantly larger labeling effort.
A.10 QUERY NUMBER EXPLORATION
We conducted the comparison experiment under different query numbers per feedback iteration (1, 5, 10, 20) on the KDD99-SA dataset; the results can be found in Fig. 12. From the figure, we can see that LMADA achieves a consistent performance improvement even with only 1 sample per iteration. On the contrary, the F1-Scores of FIF/GAOD/Meta-AAD fail to increase because they only select the top-ranked samples for updating the model, ignoring the low-ranked anomalous samples, such as the “smurf” type (as presented in Sec. 2.1).
A.11 ADDITIONAL EXPERIMENT
We add the experimental results of the top-random query strategy in Fig. 13, which represents a random selection from the samples with high anomaly scores. From the results, we conclude that our sampling method significantly outperforms top-random on the PageBlocks, Cardio, Cover, Mammography, and KDD99-SA datasets and achieves similar performance on the Annthyroid, KDD99-Http, and Shuttle datasets. Moreover, it is worth noting that the variance of the top-random strategy is much larger than that of ours.

1. What is the focus and contribution of the paper on active anomaly detection?
2. What are the strengths of the proposed approach, particularly in its ability to work with various base anomaly detection models?
3. Do you have any concerns or questions regarding the necessity and design of the proxy model?
4. How did the authors choose the architecture of the proxy model, and were other models considered?
5. Are there any limitations or trade-offs in the proposed method, such as increased computational cost or requirements for diverse sampling strategies?
6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
The paper proposes a method for using any anomaly detection method in an Active Anomaly Detection (AAD) setting. The proposed method consists of three phases: 1) training of a proxy model to emulate the results of the base anomaly detection model(s), 2) training of a data transformation module using human feedback on the anomalous samples, and 3) applying the transformation layer in front of the base anomaly detectors in the data pipeline. The proposed method shows superior results in AAD across several different benchmark datasets. The paper also uses some ablation and comparison results to support the underlying insight behind the method, namely that there needs to be a diversity-aware sampling strategy for presenting anomalous points to a human.
Strengths And Weaknesses
The paper has strong empirical results, attacks a significant problem, and takes a novel approach to making its method agnostic to the base AD algorithm(s). The performance increases seen by using the proposed method, especially with different types of base AD models, are very convincing evidence of the soundness of the presented method and its underlying insight. I also especially appreciate that the paper makes the method agnostic by operating on the data featurization rather than on the model internals; it very much embodies the fundamental tenets of data-centric data science.
There are a few weaknesses in (or questions about) the correctness of the proposed method and the clarity of the paper.
• Why does the method need a proxy model? If the proxy model is frozen at Phase 2 of training when the representation adjuster is trained, why not just take the user feedback and base AD model results and train the neural network adjuster directly? It's not clear to me why there needs to be a proxy model for the method when it seems like one could take the feedback loss and the consolidation loss directly from the base AD outputs to train the representation adjuster.
• Why is the proxy model a single-layer neural network with no bias term? Roughly speaking, adding depth to a neural network improves its ability to do non-linear transformations, so why not opt for a deeper network for the representation adjuster?
• How did you decide what the proxy model should be? Why is it a neural network and what is the architecture of that network? Were other models considered?
• Figure 6 does not really show the comparison of using a base model to using a base model + LMADA, which is the claim in section 4.4. It shows how different base model + LMADA models perform, which is also good, but there needs to be something more to show how adding LMADA for any base model improves that model.
Clarity, Quality, Novelty And Reproducibility
The novelty and quality of the paper are high. While the problem they are attacking is not novel, the creation of a model-agnostic method by transforming the data rather than the model is novel. The breadth and depth of the empirical results greatly benefit the quality of the proposed method. There are some questions that need to be addressed (see the previous section) to improve the clarity and, possibly, some aspects of the quality of the paper. |
1. What is the focus and contribution of the paper regarding diversity-aware query strategies?
2. What are the strengths and weaknesses of the proposed approach, particularly in its experimental design and comparison to prior works?
3. Do you have any concerns or suggestions regarding the paper's methodology, such as the choice of dataset information, sampling strategy, and metric usage?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
The paper presents a diversity-aware query strategy for active anomaly detection. The proposed query strategy is based on DPP. A positive semi-definite pairwise similarity matrix between the highest-ranked anomalous instances is first constructed, and a subset of instances is then selected by maximizing the determinant of the corresponding principal minor. The subset of instances selected in this manner has been shown to maximize diversity in previous literature.
Strengths And Weaknesses
The paper targets an important problem in anomaly detection.
Section 4.1 Datasets and Settings: The dataset information is not complete. For example, it is not clear which categories in each dataset were treated as anomalous and which as normal. Table 1 in the appendix also reports a different number of instances than the cited previous works (Das et al., Siddiqui et al.).
Section 4.1: "We run 5 feedback iterations and query 20 samples in each iteration. Same" -- 20 samples per iteration is an arbitrary number -- specifically for smaller datasets this is pretty large. (Das et al., Siddiqui et al.) used 1 query per iteration. For fair comparison, need to show results with this setting and another set of ablation experiments where the number of queries per iteration is varied.
The paper claims that one of the contributions is the sampling strategy. However, the experiments are not correctly designed to demonstrate its effectiveness. Typically, active learning algorithms have two aspects which are/should be independently replaceable: (a) the query strategy to select samples for user feedback (such as query most anomalous) and, (b) update the algorithm/model parameters with the new labeled data from user. The correct way would be to use the existing benchmark active learning algorithms and replace just their query strategy with the new strategy; then check whether the performance improves/degrades. For the query strategy to be useful and generic, it should work well with other algorithms instead of being tied to a specific one.
Section 4.2: "Specifically, we calculate F1-Score on the entire dataset after finishing an iteration of feedback." -- This is inappropriate for active learning. The F1-score should be computed on an independent dataset, or a different metric should be used suitable for active learning.
Figure 5: The plot of F1-score along the y-axis is improper. Consider for example if there are exactly 20 true anomalies in the dataset and all get detected in the first iteration. Then F1 will be 1.0 in the first round and then decrease monotonously over successive iterations simply because the precision will decrease. This gives a wrong impression about the algorithm behavior -- that the performance degrades with successive iterations, whereas, that is not true. For an active anomaly detection algorithm, it would be more appropriate to measure the % of true anomalies detected with each feedback iteration (or the AUC).
Section 3.2: "In this way, LMADA achieves feedback incorporation in a lightweight manner, only with a non-linear representation transformation." -- It is misleading to call this approach 'lighweight' just because one (last) stage appears to be simple.
Clarity, Quality, Novelty And Reproducibility
The paper is mostly easy to understand, however, some things need better clarification e.g., it is not obvious which base detector has been used for the plots in Figure 5. The paper presents a mildly novel work, but the overall architecture is quite convoluted. It is possible that the presence of the proxy neural network hinders explainability of the anomalies. |
ICLR | Title
Towards Lightweight, Model-Agnostic and Diversity-Aware Active Anomaly Detection
Abstract
Active Anomaly Discovery (AAD) is flourishing in the anomaly detection research area, which aims to incorporate analysts’ feedback into unsupervised anomaly detectors. However, existing AAD approaches usually prioritize the samples with the highest anomaly scores for user labeling, which hinders the exploration of anomalies that were initially ranked lower. Besides, most existing AAD approaches are specially tailored for a certain unsupervised detector, making it difficult to extend to other detection models. To tackle these problems, we propose a lightweight, model-agnostic and diversity-aware AAD method, named LMADA. In LMADA, we design a diversity-aware sample selector powered by Determinantal Point Process (DPP). It considers the diversity of samples in addition to their anomaly scores for feedback querying. Furthermore, we propose a model-agnostic tuner. It approximates diverse unsupervised detectors with a unified proxy model, based on which the feedback information is incorporated by a lightweight non-linear representation adjuster. Through extensive experiments on 8 public datasets, LMADA achieved 74% F1-Score improvement on average, outperforming other comparative AAD approaches. Besides, LMADA can also achieve significant performance boosting under any unsupervised detectors.
1 INTRODUCTION
Anomaly detection aims to detect the data samples that exhibit significantly different behaviors compared with the majority. It has been applied in various domains, such as fraud detection (John & Naaz, 2019), cyber intrusion detection (Sadaf & Sultana, 2020), medical diagnosis (Fernando et al., 2021), and incident detection (Wang et al., 2020). Numerous unsupervised anomaly detectors have been proposed (Zhao et al., 2019; Boukerche et al., 2020; Wang et al., 2019). However, practitioners are usually unsatisfied with their detection accuracy (Das et al., 2016), because there is usually a discrepancy between the detected outliers and the actual anomalies of interest to users (Das et al., 2017; Zha et al., 2020; Siddiqui et al., 2018). To mitigate this problem, Active Anomaly Discovery (AAD) (Das et al., 2016), is proposed to incorporate analyst’s feedback into unsupervised detectors so that the detection output better matches the actual anomalies.
The general workflow of Active Anomaly Discovery is shown in Fig. 1. In the beginning, a base unsupervised anomaly detector is trained. After that, a small number of samples are selected and presented to analysts for feedback. The labeled samples are then utilized to update the detector, incorporating the feedback information. Based on the updated detection model, a new set of samples is recommended for the next feedback iteration. This process repeats until the labeling budget is exhausted, after which the tuned detection model is ready to be applied.
Despite the progress of existing AAD methods (Das et al., 2017; Zha et al., 2020; Siddiqui et al., 2018; Keller et al., 2012; Zhang et al., 2019; Li et al., 2019; Das et al., 2016), some intrinsic limitations of these approaches still pose great barriers to their real-world applications. Firstly, most AAD methods adopt the top-selection strategy for the feedback querying (Das et al., 2017; Zha et al., 2020; Siddiqui et al., 2018; Li et al., 2019), i.e., the samples with the highest anomaly scores are always prioritized for user labeling. However, it hinders exploring the actual anomalies that are not initially scored highly by the base detector. As such, these AAD approaches are
∗Qingwei Lin is the corresponding author.
highly susceptible to over-fitting to the top-ranked samples, resulting in a suboptimal recall with respect to all anomalies. We shall demonstrate this with a real example in Sec. 2.1. Secondly, most existing AAD approaches (Das et al., 2017; 2016; Siddiqui et al., 2018) are tightly tailored for a certain kind of detection model, making it difficult to extend to other unsupervised detectors.
They need to modify the internal structure of a particular type of unsupervised detector, endowing it with the ability of feedback integration. Therefore, it is impractical and ad hoc to re-design them for each of the wide variety of unsupervised detection models. Recent AAD methods (Zha et al., 2020; Li et al., 2019)
attempted to generalize to arbitrary detectors. However, they can barely scale because their model size grows with the number of samples.
To tackle these problems in AAD, we propose a Lightweight, Model-Agnostic and Diversity-Aware active anomaly detection approach, named LMADA. It consists of two components, i.e., a sample selector (for sample selection) and a model tuner (for feedback incorporation). In the sample selector, we take the anomaly scores as well as the diversity of samples into account, instead of solely picking up the most anomalous ones for feedback querying. Specifically, we fuse anomaly scores and the feedback repulsion scores into a diversity-aware sampling technique powered by Determinantal Point Processes (DPP) (Chen et al., 2018; Kulesza et al., 2012). In the model tuner, we first leverage a neural network as the proxy model to approximate an arbitrary unsupervised detector. After that, we fix the weights of the proxy model and learn a representation adjuster on top of it. The representation adjuster is responsible for transforming the input feature vector to fit the feedback-labeled samples. Finally, each sample to be detected is transformed by the representation adjuster and then fed back to the base detector to estimate its anomaly score. In this way, the model tuner shields the details of different unsupervised detectors and achieves lightweight feedback incorporation, only via a non-linear representation transformation.
We conducted extensive experiments on 8 public AD datasets to evaluate the effectiveness of our proposed method. The experimental results show that LMADA can achieve 74% F1-Score improvement on average, outperforming other comparative AAD approaches under the same feedback sample budget. In addition, we also validated that LMADA works well under various unsupervised anomaly detectors.
2 RELATED WORK AND MOTIVATION
In this section, we will give a brief introduction to the existing AAD work and analyze their limitations from two aspects: (1) sample selection and (2) feedback incorporation.
2.1 SAMPLE SELECTION
Most AAD approaches (Siddiqui et al., 2018; Das et al., 2017; Zha et al., 2020; Li et al., 2019; Das et al., 2016) adopt the top-selection strategy. Anomalous samples that are not initially ranked at the top by the base detector have little chance to be selected for feedback, and therefore can hardly be recalled subsequently. We show a real example using KDD-99 SA1, a well-known intrusion detection dataset. The dataset contains one normal class (96.7%) and 11 anomalous classes (3.3%) of various intrusion types. We applied the Isolation Forest (Liu et al., 2012) detector (a widely accepted one) to this dataset and found that the recall was around 0.28. We show the anomaly score distribution for the normal samples and three major intrusion types, respectively, in Fig. 2. Only the samples of two intrusion types, i.e., “neptune” and “satan”, are assigned high anomaly scores (0.60 ∼ 0.70). However, the samples of another major intrusion type, “smurf” (accounting for 71.27% of all anomalous samples), are assigned relatively low anomaly scores (0.50 ∼ 0.55), which is even below the anomaly scores of many normal samples (4168 normal samples vs. 15 “smurf” anomalies were assigned anomaly scores over 0.55). Under this circumstance, selecting only the top samples for feedback can hardly improve the recall for the “smurf” type. In LMADA, we consider both the anomaly scores and the diversity of samples during sample selection. In this way, samples
1https://archive.ics.uci.edu/ml/machine-learning-databases/kddcup99-mld/kddcup.data.gz
not initially ranked at the top, like the “smurf” anomalies in our example, have an opportunity to be presented to analysts.
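For readers who want to reproduce this observation, the following minimal sketch (Python, scikit-learn) fits an Isolation Forest and inspects per-type score distributions. The `load_kdd99_sa` helper and the exact preprocessing are assumptions of this sketch, not part of the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import recall_score

# X: (n, d) feature matrix; y: 0 for normal, 1 for anomaly; attack_type: (n,)
# array of per-sample class names ("normal", "smurf", "neptune", ...).
# load_kdd99_sa is a hypothetical helper; loading/encoding is assumed elsewhere.
X, y, attack_type = load_kdd99_sa()

iforest = IsolationForest(n_estimators=100, random_state=0).fit(X)
# score_samples is higher for inliers, so negate and min-max rescale to get
# an anomaly score in [0, 1] (higher = more anomalous).
raw = -iforest.score_samples(X)
scores = (raw - raw.min()) / (raw.max() - raw.min())

# Flag the top 3.3% (the known contamination rate) and measure overall recall.
threshold = np.quantile(scores, 1 - 0.033)
print("recall:", recall_score(y, (scores >= threshold).astype(int)))

# Per-type score ranges reveal which intrusion types the detector misses.
for t in ("normal", "neptune", "satan", "smurf"):
    s = scores[attack_type == t]
    print(f"{t:>8}: mean={s.mean():.2f}  95th pct={np.quantile(s, 0.95):.2f}")
```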
2.2 FEEDBACK INCORPORATION
How to incorporate feedback information is another focus of AAD. Das et al. (Das et al., 2017) added a set of adjustable weights to the random projections generated by the LODA detector (Pevnỳ, 2016), by which the feedback can be incorporated. They also modified Isolation Forest (Liu et al., 2012) by adding weights to the tree paths, re-weighting the isolation score based on the feedback (Das et al., 2016). Siddiqui et al. (Siddiqui et al., 2018) extended the re-weighting strategy to Generalized Linear Anomaly Detectors (GLAD) with the help of online convex optimization (Hazan et al., 2016). iRRCF-Active (Wang et al., 2020) applied a similar idea to the iRRCF detector (Guha et al., 2016). In summary, the above methods require tailoring the weights to the specific model structure of each unsupervised detector and then adjusting the weights with feedback-labeled samples by gradient descent. However, this is impractical for such a diverse range of unsupervised detectors, as the modification is sophisticated and case-by-case. In LMADA, we propose a model-agnostic method to incorporate feedback information, regardless of the type of unsupervised detector.
We also note that some AAD approaches have been proposed that attempt to support arbitrary base detectors. Meta-AAD (Zha et al., 2020) first extracts a set of transferable features based on k-neighbors to labeled instances and feeds them into a pre-trained meta-policy model for detection. GAOD (Li et al., 2019) leverages label spreading (Zhou et al., 2003), a graph-based semi-supervised model, to iteratively spread label information to neighbors. In summary, both AAD methods leverage neighborhoods of labeled instances to exploit feedback information but require persisting the entire dataset for neighboring sample retrieval. Therefore, the final tuned detection model becomes increasingly heavy. In this paper, the feedback incorporation of LMADA is achieved by only a non-linear transformation, which is lightweight enough for real-world application.
3 APPROACH
In this section, we elaborate on the details of LMADA. Following the general AAD workflow shown in Fig. 1, LMADA consists of two components, i.e., the sample selector and the model tuner. In the sample selector, we consider the diversity in addition to the anomaly scores when recommending valuable samples for labeling. In the model tuner, we propose a model-agnostic strategy to incorporate feedback information for arbitrary unsupervised detectors. It is achieved in a lightweight manner, relying only on a simple non-linear transformation.
3.1 SAMPLE SELECTOR
As discussed in Sec. 2.1, sample selection in AAD should consider the diversity of the selected samples in addition to their anomaly scores. The diversity here is not in terms of anomaly scores but in the distribution of the samples. In summary, we aim to select a subset of samples that have high anomaly scores and, meanwhile, are dissimilar from each other. We use the example shown in Fig. 3 to illustrate this idea. There are two types of anomalies, A and B, that stray from the majority of samples. The anomaly scores (based on the Isolation Forest) are indicated by the colors. The
deeper the color, the higher the anomaly score. The selected samples are indicated by the blue cross markers. The number of selected samples is fixed at 20. Type-B anomalies are assigned relatively lower anomaly scores compared with type-A because they are more adjacent to the normal samples.
If we use the top-selection strategy, the selected samples would mostly come from type-A (as shown in the left subfigure of Fig. 3), which may not cover the other types of anomalies. Therefore, the feedback would not help the AAD to recall more anomalies, e.g., type-B in this example. The desired sample selection is shown in the right subfigure of Fig. 3, where the selector achieves good coverage of all samples with relatively high anomaly scores. In this way, we can enhance the anomaly scores of all anomaly types, instead of only those originally ranked high by the base detector.
Inspired by (Chen et al., 2018), we leverage a widely-adopted diversity sampling method, i.e., Determinantal Point Processes (DPP) (Kulesza et al., 2012), to achieve the above sampling target. We first introduce DPP in Sec. 3.1.1, and then describe how we balance the dual objectives, i.e., anomaly score and diversity, in Sec. 3.1.2.
3.1.1 DETERMINANTAL POINT PROCESSES (DPP)
The Determinantal Point Process (DPP) was originally introduced for fermion systems in thermal equilibrium (Macchi, 1975; Chen et al., 2018). Recently, it has been successfully applied to various machine learning tasks, e.g., image search (Kulesza & Taskar, 2011a), document summarization (Kulesza & Taskar, 2011b) and recommendation systems (Gillenwater et al., 2014). Given a dataset $D = \{s_1, s_2, \ldots, s_n\}$, DPP aims to select a subset $C$ from $D$. Specifically, DPP constructs a real positive semidefinite (PSD) kernel matrix $L \in \mathbb{R}^{n \times n}$ derived from $D$. For each subset $C \subseteq D$, the probability of selecting $C$ from $D$, denoted as $P(C)$, is proportional to $\det(L_C)$, where $\det(L_C)$ is the determinant of the principal minor $L_C$. The objective of DPP is to derive the $C^*$ which maximizes the value of $\det(L_C)$, as shown in Eq. 1. As an example, to achieve maximum diversity, the kernel matrix could be constructed as the pairwise similarity matrix (Kulesza et al., 2012).

$$C^* = \arg\max_{C \subseteq D} \det(L_C) \quad (1)$$
How to approximately solve this NP-hard problem (Ko et al., 1995) has been well studied in (Gillenwater et al., 2012; Han et al., 2017; Li et al., 2016; Chen et al., 2018) and we adopt the greedy algorithm proposed in (Chen et al., 2018) in our paper. We will introduce how to construct a specially tailored kernel matrix L for AAD in the next section.
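To make the selection procedure concrete, here is a minimal greedy sketch of the MAP inference in Eq. 1. It is the naive variant that re-evaluates the determinant at every step; the Cholesky-based greedy of Chen et al. (2018) adopted in this paper computes the same greedy solution far more efficiently.

```python
import numpy as np

def greedy_dpp(L, k):
    """Greedily grow C by the item that most increases det(L_C)."""
    n = L.shape[0]
    selected = []
    for _ in range(k):
        best_i, best_logdet = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best_i, best_logdet = i, logdet
        if best_i is None:  # no candidate keeps det(L_C) positive; stop early
            break
        selected.append(best_i)
    return selected

# Example on a random PSD kernel: with a rank-8 L, at most 8 items can be
# selected before every remaining candidate makes L_C singular.
rng = np.random.default_rng(0)
V = rng.normal(size=(50, 8))
print(greedy_dpp(V @ V.T, 5))
```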
3.1.2 KERNEL MATRIX CONSTRUCTION
In LMADA, we construct a kernel matrix $L$ whose entries are formally written as Eq. 2,

$$L_{ij} = \langle a_i r_i s_i, a_j r_j s_j \rangle = a_i a_j r_i r_j \langle s_i, s_j \rangle \quad (2)$$

where $a_i$ denotes the anomaly score uniformly re-scaled to the range $[0, 1]$. It is used to motivate DPP to select samples with high anomaly scores. Meanwhile, we need to select diverse samples within and across feedback iterations. In each feedback iteration, the inner product $\langle s_i, s_j \rangle$ measures the pairwise similarity of all candidate samples, based on which DPP prefers dissimilar samples (Kulesza et al., 2012). As there are multiple feedback iterations, we expect the samples selected in the current iteration to also differ from those sampled in previous iterations. To this end, we maintain a data pool $P$ preserving the selected samples from previous feedback iterations. The minimum distance between a candidate sample $s_i$ and the selected samples cached in $P$ is defined as the feedback repulsion score $r_i$, as shown in Eq. 3.

$$r_i = \min(\{1 - \langle s_i, s_k \rangle \mid \forall s_k \in P\}) \quad (3)$$
From Eq. 2, we can conclude that $\det(L_C)$ is proportional to $a_i a_j r_i r_j$ and inversely proportional to $\langle s_i, s_j \rangle$ among the selected samples in $C$. In this way, it induces DPP to select more anomalous (i.e., higher $a_i a_j$) data points that are not adjacent to the previously selected examples (i.e., higher $r_i r_j$). Meanwhile, the selected data points are also distinct enough from each other (i.e., lower $\langle s_i, s_j \rangle$). A qualitative analysis is given in Appendix Sec. A.1.
Theoretically, the complexity of constructing $L$ is $O(n^2)$, which is expensive for a large dataset. However, anomalous samples generally account for a small percentage of the whole dataset compared with the normal class (Zhao et al., 2019; Boukerche et al., 2020). For instance, in the KDD99-SA dataset introduced in Sec. 2.1, only 3.3% of the samples are anomalies. It is unnecessary to regard all samples as candidates for the sample selector. Consequently, we construct the kernel matrix with only the pre-truncated top $\alpha\%$ of samples ranked by their anomaly scores. In general, if $\alpha$ is small enough (e.g., < 3%), the selected samples would be those with the highest anomaly scores, i.e., similar to top-selection. On the other hand, if $\alpha$ is large (e.g., > 30%), the selected samples would become too diverse to retrieve samples worthwhile for feedback. We evaluate different $\alpha$ settings in Appendix Sec. A.8.
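A sketch of the kernel construction (Eq. 2 and Eq. 3, with the top-α% pre-truncation) is given below. It assumes the sample vectors are L2-normalized so that 1 − ⟨s_i, s_k⟩ behaves like a distance in Eq. 3; array shapes and names are illustrative.

```python
import numpy as np

def build_kernel(S, a, pool, alpha=0.10):
    """S: (n, d) L2-normalized samples; a: (n,) anomaly scores in [0, 1];
    pool: (m, d) samples labeled in earlier iterations (m may be 0)."""
    # Pre-truncation: keep only the top alpha% most anomalous candidates.
    keep = np.argsort(-a)[: max(1, int(alpha * len(a)))]
    S_c, a_c = S[keep], a[keep]

    # Feedback repulsion score r_i (Eq. 3); r_i = 1 when the pool is empty.
    if len(pool) == 0:
        r = np.ones(len(keep))
    else:
        r = (1.0 - S_c @ pool.T).min(axis=1)

    # L_ij = a_i a_j r_i r_j <s_i, s_j> (Eq. 2), i.e., a weighted Gram matrix,
    # which stays PSD because it equals diag(w) (S_c S_c^T) diag(w).
    w = a_c * r
    L = (w[:, None] * w[None, :]) * (S_c @ S_c.T)
    return L, keep  # `keep` maps kernel indices back to dataset indices
```

The returned `keep` array lets the indices chosen by the greedy DPP of Sec. 3.1.1 be mapped back to the original dataset.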
3.2 MODEL TUNER
After labeling the examples recommended by the sample selector, the model tuner focuses on how to incorporate the newly labeled data points. The model tuner should be agnostic to the base unsupervised detector. In other words, any unsupervised detection model can be easily integrated into our framework. To achieve this goal, we propose a three-phase model tuner in LMADA, as shown in Fig. 4. Firstly, we set up a neural network as a proxy model (Coleman et al., 2019) to mimic the behaviors of diverse base detectors. After that, a representation adjuster is added in front of the frozen proxy model and trained on the labeled samples. Finally, the tuned representation adjuster is used to transform the original samples into new representation vectors, which are fed back to the base detector for re-scoring. The feedback continues for multiple iterations until the sampling budget is exhausted. The tuned representation adjuster is applied as illustrated in Phase-3 of Fig. 4. Given a testing sample $s_i$, we first transform it into a new representation vector $h_i$ via the representation adjuster $\Omega(s_i)$. Then we directly feed $h_i$ into the base anomaly detector $f$ and obtain the final detection result $f(h_i)$. In this way, LMADA achieves feedback incorporation in a lightweight manner, with only a non-linear representation transformation.
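At inference time (Phase-3), only one extra transformation sits in front of the unchanged base detector. A minimal sketch, assuming a PyTorch adjuster and a PyOD-style detector exposing `decision_function`:

```python
import torch

def score_with_feedback(base_detector, adjuster, X_test):
    """Transform test samples with the tuned adjuster, then let the *base*
    detector re-score them; the detector itself is never modified."""
    with torch.no_grad():
        H = adjuster(torch.as_tensor(X_test, dtype=torch.float32)).numpy()
    return base_detector.decision_function(H)
```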
3.2.1 PROXY MODEL APPROXIMATION
As introduced in Sec. 2.2, unsupervised detectors of various types pose a great challenge to modelagnostic AAD. There are significant differences between the model structures of different unsupervised detectors. Most existing AAD work (Siddiqui et al., 2018; Das et al., 2017; 2016; Wang et al., 2020) needs to specifically modify the internal structure of unsupervised detectors.
To tackle this problem, we utilize a deep neural network as a proxy model to approximate the behaviors of diverse unsupervised detectors. In this way, we can turn unsupervised detectors into gradient-optimizable neural networks, which facilitates the subsequent representation adjuster tuning (more details in Sec. 3.2.2). As shown in Phase-1 of Fig. 4, we use the normalized anomaly scores $f(s_i)$ generated by the base detector as pseudo-labels and set up a neural network $\Phi$ in parallel to fit them. The proxy model is composed of one input layer and multiple hidden layers, followed by an output layer activated by the sigmoid function. The Mean-Squared-Error (MSE) is adopted as the loss function during proxy model training: $L_{proxy} = \sum_{i=1}^{b} (\Phi(s_i) - f(s_i))^2$, where $b$ denotes the batch size.
After the proxy model training, the anomalous patterns that are captured by the base detector have been learned by the proxy model, i.e., the proxy anomaly scores $\Phi(s_i) \approx f(s_i)$. The key point here is that the internal structures of different unsupervised detectors do not need to be considered in this training process.
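A minimal PyTorch sketch of Phase-1 follows; the layer sizes, optimizer, and training schedule mirror the settings reported in Appendix A.5, and everything else is an assumption of the sketch.

```python
import torch
import torch.nn as nn

class ProxyModel(nn.Module):
    """MLP with a sigmoid head that regresses the base detector's scores."""
    def __init__(self, d_in, d_hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_proxy(X, f_scores, epochs=10, lr=0.01, batch_size=512):
    """X: (n, d) float tensor; f_scores: (n,) base-detector scores in [0, 1]."""
    proxy = ProxyModel(X.shape[1])
    opt = torch.optim.Adam(proxy.parameters(), lr=lr)
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(X, f_scores),
        batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader:
            loss = ((proxy(xb) - yb) ** 2).sum()  # L_proxy (MSE over the batch)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return proxy
```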
3.2.2 REPRESENTATION ADJUSTER TUNING
In Phase-2, we devise a representation adjuster $\Omega$ in front of the proxy model to incorporate the feedback information. The representation adjuster is a simple non-linear transformation layer, which takes the original sample vector $s_i$ as input and transforms it into a new feature space with the same dimensionality, i.e., $h_i = \Omega(s_i) = \mathrm{sigmoid}(W s_i)$, where $h_i \in \mathbb{R}^d$ and $s_i \in \mathbb{R}^d$. As shown in the middle of Fig. 4, the transformed $h_i$ is fed into the trained proxy model $\Phi$ to generate the proxy anomaly score $\Phi(h_i)$. Based on that, $W$ is updated under the loss function in Eq. 4. The representation adjuster can be trained by a gradient descent optimizer because the subsequent proxy model (as shown in Fig. 4) is also a neural network. The parameters of the proxy model are frozen during the representation adjuster tuning phase.
$$L_{adjuster} = L_{feedback} + L_{consolidation} + \eta \quad (4)$$
$L_{adjuster}$ is composed of three components: the feedback loss, the consolidation loss, and a regularization term $\eta$. $L_{feedback}$ is used to fit the labeled samples in the data pool $P$, as shown in Eq. 5, where $y_i$ represents the feedback label (+1 for the anomalous class and -1 for the normal class) of the sample $s_i$.
$$L_{feedback} = -\sum_{i=1}^{b} y_i \cdot \log(\Phi(h_i)), \quad \forall s_i \in P \quad (5)$$
Training with only a few labeled samples would bias the representation adjuster toward the feedback labels while ignoring the patterns already learned from the base detector. So we design another component inspired by (Li & Hoiem, 2017), i.e., $L_{consolidation}$, that serves to consolidate the knowledge of the base unsupervised detector, as shown in Eq. 6. $\tilde{h}_i$ denotes the transformed sample representation from the last feedback iteration ($\tilde{h}_i = s_i$ in the first feedback iteration). It forces the proxy anomaly scores $\Phi(h_i)$ of the remaining unlabeled samples to stay close to the original anomaly scores $f(\tilde{h}_i)$ in the newly transformed feature space. We note that $L_{consolidation}$ is not conducive to fitting $L_{feedback}$, as the former tends to retain the original representation. To achieve a trade-off between them, we assign a weight to the consolidation loss of each sample. Intuitively, if an unlabeled sample $s_i$ is similar to the labeled samples in the feedback data pool $P$, its consolidation loss should receive a lower weight, reducing the constraints on fitting $L_{feedback}$. On the contrary, unlabeled samples that are unlike the data points in $P$ should be assigned a higher weight to enhance the influence of the consolidation loss. This intuition is fully aligned with the feedback repulsion score $r_i$ introduced in Sec. 3.1.2, and we thus use it as the weight of the consolidation loss.
$$L_{consolidation} = \sum_{i=1}^{b} r_i \cdot \left(\Phi(h_i) - f(\tilde{h}_i)\right)^2, \quad \forall s_i \notin P \quad (6)$$
The last component is the penalty for feature space transformation, because drastic changes to the original sample vectors are undesired. To this end, we set $\eta = \sum_{i=1}^{b} \|h_i - s_i\|^2$. More training details for the representation adjuster can be found in Appendix Sec. A.2.
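Putting Eq. 4 to Eq. 6 together, a sketch of the adjuster objective for one mixed batch follows; the proxy is frozen, so gradients flow only into W. The batch layout and the cache `f_prev` of previous-iteration base scores are assumptions of this sketch.

```python
import torch

def adjuster_loss(adjuster, proxy, s, labeled_mask, y, r, f_prev):
    """s: (b, d) inputs; labeled_mask: (b,) bool; y: (b,) labels in {-1, +1}
    (ignored where unlabeled); r: (b,) repulsion scores; f_prev: (b,) base
    scores f(h~_i) of the previous-iteration representations."""
    h = adjuster(s)    # h_i = sigmoid(W s_i)
    p = proxy(h)       # frozen proxy score in (0, 1)

    # Eq. 5, taken literally with y_i in {-1, +1}: minimizing pushes labeled
    # anomalies toward 1 and labeled normals toward 0.
    lab = labeled_mask
    l_feedback = -(y[lab] * torch.log(p[lab].clamp_min(1e-8))).sum()

    # Eq. 6: keep unlabeled scores near their previous values, weighted by r_i.
    unl = ~labeled_mask
    l_consolidation = (r[unl] * (p[unl] - f_prev[unl]) ** 2).sum()

    # eta: penalize drastic changes to the representations themselves.
    eta = ((h - s) ** 2).sum()
    return l_feedback + l_consolidation + eta
```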
4 EXPERIMENT
4.1 DATASETS AND SETTINGS
We evaluated our proposed method on 8 public datasets, including PageBlocks, Annthyroid, Cardio, Cover, KDD99-Http, Mammography, KDD99-SA, Shuttle, which are widely used by existing AAD
approaches (Siddiqui et al., 2018; Zha et al., 2020; Li et al., 2019; Das et al., 2017; 2019). The details of these datasets can be found in Appendix Sec. A.3. We run 5 feedback iterations and query 20 samples in each iteration. As in existing work, we used simulated rather than real user feedback, since the ground truth is known for these public datasets. The experimental environment and the parameter settings can be found in Appendix Sec. A.4 and Sec. A.5, respectively.
4.2 COMPARISON METHODS AND METRICS
We compared LMADA with three state-of-the-art AAD methods, i.e., FIF (Siddiqui et al., 2018), Meta-AAD (Zha et al., 2020), and GAOD (Li et al., 2019). FIF adds a set of weights to the tree branches of the Isolation Forest detector and tunes them via online convex optimization with feedback information. GAOD utilizes a semi-supervised method (label spreading (Zhou et al., 2003)) to incorporate user feedback. Both of the above approaches adopt the top-selection strategy. Meta-AAD extracts a set of transferable features for a pre-trained meta-policy detection model, considering both long-term and short-term benefits in querying feedback.
We use F1-Score Curve to evaluate the effectiveness of different AAD methods. Specifically, we calculate F1-Score on the entire dataset after finishing an iteration of feedback. Besides, we also calculate the Area-Under-Curve (AUC) (Ling et al., 2003) of the F1-Score Curve.
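For clarity, a small sketch of this evaluation protocol follows; computing the AUC by the trapezoidal rule over normalized iteration indices is our reading of the metric, and the prediction lists are assumed inputs.

```python
import numpy as np
from sklearn.metrics import auc, f1_score

def f1_curve_auc(y_true, preds_per_iter):
    """preds_per_iter: one (n,) 0/1 prediction vector per feedback iteration,
    each computed over the entire dataset."""
    f1s = [f1_score(y_true, p) for p in preds_per_iter]
    x = np.linspace(0.0, 1.0, num=len(f1s))   # normalized iteration index
    return f1s, auc(x, np.asarray(f1s))       # trapezoidal area under the curve
```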
4.3 COMPARISON EXPERIMENT RESULTS
We compared our proposed method with three state-of-the-art AAD approaches, and the results are illustrated in Fig. 5. For fairness, we used Isolation Forest as the base detector because it was adopted by all the comparison methods (Zha et al., 2020; Siddiqui et al., 2018; Li et al., 2019). To ensure reproducibility, we repeated our experiments 10 times on each dataset and plotted the average F1-Score and the standard error bar (Altman & Bland, 2005). The AUC value of each F1-Score Curve is shown in the legend.
From the results, we can confirm that LMADA performs better than the other AAD methods. With 20 feedback samples per iteration, LMADA achieved consistently higher F1-Scores on most datasets. Especially on the KDD99-SA, Cover, and Cardio datasets, LMADA boosted the F1-Score of the base detector by an average of 144%, to 0.80+, after 5 feedback iterations. For the PageBlocks, Annthyroid, and Mammography datasets, LMADA also increased the F1-Score by 60% on average, significantly outperforming the other AAD models. As for the KDD99-Http and Shuttle datasets, the initial performance of the base detector has already reached a relatively high level. Under this circumstance, LMADA can also maintain a high detection accuracy, exhibiting its robustness.
Among the comparison methods, Meta-AAD performed much better than the other two because it utilizes reinforcement learning to learn a meta-policy for feedback querying, rather than simply picking up the samples with the highest anomaly scores. However, the diversity of samples is not taken into account explicitly, resulting in relatively lower performance compared with LMADA (e.g., 0.29 AUC of Meta-AAD vs. 0.87 AUC of LMADA on the KDD99-SA dataset). FIF and GAOD even had difficulty preserving the upward trend of their F1-Score curves as more feedback samples were added. As we discussed in Sec. 2.1, the top-selection strategy of both methods hinders the exploration of lower-ranked anomalous samples. Moreover, their detectors were tuned to over-fit the scarce feedback-labeled samples, leading to a decreasing recall. We verify this in Appendix Sec. A.9.
4.4 MODEL-AGNOSTIC EVALUATION
We aim to propose a model-agnostic AAD approach that can be easily extended to arbitrary unsupervised detectors. As such, we evaluated the effectiveness of LMADA under five different but commonly-used unsupervised detectors, including AutoEncoder (Vincent et al., 2010), PCA (Shyu et al., 2003), OCSVM (Schölkopf et al., 2001), LODA (Pevnỳ, 2016; Das et al., 2016), and IF. The experimental settings are the same as in Sec. 4.3, and the results are shown in Fig. 6.
From these figures, we can conclude that LMADA works well with different unsupervised detectors. It consistently improves the F1-Score on all eight datasets, whatever base detector is adopted. Moreover, we found that the performance gains achieved by LMADA vary with different unsupervised detectors. Taking the KDD99-Http dataset as an example, LODA performs much worse than the other base detectors at the beginning (F1-Score 0.02 compared to ∼0.82 for the other detectors). Even so, LMADA was able to improve the performance of LODA from 0.02 to 0.96 after 5 iterations. We also noted that the variance of its results is significantly larger than the others'. The reason is that LODA is inaccurate and unstable on the KDD99-Http dataset, making it difficult to provide effective information for the sample selector and the model tuner. These experimental results confirm that the initial performance of the base detector has a great influence on AAD approaches.
4.5 SAMPLE SELECTOR VALIDATION
In this section, we validated the effectiveness of our proposed sample selector in LMADA. As we discussed in Sec. 2.1, diversity plays a critical role in AAD. In order to verify this point, we conducted an ablation study on the KDD99-SA dataset. In this dataset, 11 anomalous classes and the normal class are well annotated separately so that we can study how samples would be selected by different sampling strategies. We compared our proposed sampling method with the commonly-used
top-selection strategy (Das et al., 2017; 2016; Siddiqui et al., 2018), and the stratified sampling described in (Guha et al., 2016) (i.e., divide samples into g groups based on their anomaly scores and then select examples randomly from each group). The model tuner is fixed. The selected anomalous classes under these settings and their corresponding improved F1-Scores are shown in Fig. 7(a) and Fig. 7(b), respectively.
From Fig.7(a), we can see that the sample selector of LMADA is able to cover more anomaly classes, compared with the other two sampling strategies. Furthermore, we also confirm the necessity of the diversity-aware selection from Fig.7(b) since our sample selector achieved much higher F1-Scores than those under the top-selection or the stratified sampling methods. For example, in the first feedback iteration, our proposed sample selector chose “smurf” samples (shown in blue color) for feedback, which were missed by the other two. As we stated in Sec.2.1, “smurf” samples were not assigned high anomaly scores by the base detector (IF) but they actually account for 71.27% of all anomalies. Therefore, we can see that F1-Score can be significantly improved from 0.28 to 0.94 with labeled “smurf” anomalies, while the other two strategies failed to achieve this high F1-Score. The complete results on all datasets can be found in Appendix Sec. A.6.
4.6 MODEL TUNER VALIDATION
In this section, we present the effectiveness of our proposed model tuner. As introduced in Sec. 3.2, the transformed representations $h_i$ are trained based on the proxy model but are fed back to the base unsupervised detector to obtain the final anomaly scores. We aim to study how large the difference is between the anomaly scores generated by the base detector, $f(h_i)$, and by the proxy model, $\Phi(h_i)$. We also conducted this ablation experiment on the KDD99-SA dataset, and the results are exhibited in Fig. 7(c).
This figure shows that there is only a narrow gap in F1-Scores between the proxy model (green line) and the base unsupervised detector (red line). This indicates that the proxy model has captured the knowledge learned by the base detection method, as they produce similar anomaly scores. As such, the transformed representations $h_i$ trained via the proxy model can be smoothly transferred to the base unsupervised detector. The complete experimental results on all datasets are given in Appendix Sec. A.7.
5 CONCLUSION
In this paper, we propose LMADA, a lightweight, model-agnostic and diversity-aware active anomaly detection method. In the sample selector of LMADA, we take the anomaly scores as well as the diversity of samples into account, unlike most existing AAD work that solely picks the most anomalous ones for feedback querying. In the model tuner of LMADA, we propose a model-agnostic strategy to incorporate feedback information, regardless of the type of unsupervised detector. It can be achieved by a lightweight non-linear transformation. Through the extensive evaluation on 8 public AD datasets, we show that LMADA can achieve 74% F1-Score improvement on average, significantly outperforming other comparative AAD approaches.
A APPENDIX
A.1 THE QUALITATIVE ANALYSIS OF EXTENDED DPP IN SAMPLE SELECTOR
The kernel matrix $L$ is defined in Eq. 2. As introduced in Sec. 3.1.1, we aim to select a subset $C$ with the highest $\det(L_C)$. The principal minor $L_C$ is as follows:

$$L_C = \begin{bmatrix}
a_1^2 r_1^2 \langle s_1, s_1 \rangle & \cdots & a_1 a_j r_1 r_j \langle s_1, s_j \rangle & \cdots & a_1 a_{|C|} r_1 r_{|C|} \langle s_1, s_{|C|} \rangle \\
a_2 a_1 r_2 r_1 \langle s_2, s_1 \rangle & \cdots & a_2 a_j r_2 r_j \langle s_2, s_j \rangle & \cdots & a_2 a_{|C|} r_2 r_{|C|} \langle s_2, s_{|C|} \rangle \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
a_i a_1 r_i r_1 \langle s_i, s_1 \rangle & \cdots & a_i a_j r_i r_j \langle s_i, s_j \rangle & \cdots & a_i a_{|C|} r_i r_{|C|} \langle s_i, s_{|C|} \rangle \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
a_{|C|} a_1 r_{|C|} r_1 \langle s_{|C|}, s_1 \rangle & \cdots & a_{|C|} a_j r_{|C|} r_j \langle s_{|C|}, s_j \rangle & \cdots & a_{|C|}^2 r_{|C|}^2 \langle s_{|C|}, s_{|C|} \rangle
\end{bmatrix} \quad (7)$$

$\det(L_C)$ can be calculated as in Eq. 8,

$$\det(L_C) = \sum (-1)^{\tau(p_1, p_2, \ldots, p_{|C|})} L_{1 p_1} L_{2 p_2} \cdots L_{|C| p_{|C|}} \quad (8)$$

where $p_1, p_2, \ldots, p_{|C|}$ ranges over all permutations of $\{1, 2, \ldots, |C|\}$, and $\tau(p_1, p_2, \ldots, p_{|C|})$ denotes the inversion number of $p_1, p_2, \ldots, p_{|C|}$. According to Eq. 2, $\det(L_C)$ can be further expanded as

$$\det(L_C) = \prod_{i=1}^{|C|} a_i^2 r_i^2 \sum (-1)^{\tau(p_1, p_2, \ldots, p_{|C|})} \langle s_1, s_{p_1} \rangle \langle s_2, s_{p_2} \rangle \cdots \langle s_{|C|}, s_{p_{|C|}} \rangle \quad (9)$$

$$= \prod_{i=1}^{|C|} a_i^2 r_i^2 \cdot \left| \det\left( [s_1, s_2, \ldots, s_{|C|}]^\top [s_1, s_2, \ldots, s_{|C|}] \right) \right| \quad (10)$$

$$= \prod_{i=1}^{|C|} a_i^2 r_i^2 \cdot \left( s_1 \otimes s_2 \otimes \cdots \otimes s_{|C|} \right)^2 = \prod_{i=1}^{|C|} a_i^2 r_i^2 \cdot V^2 \quad (11)$$

$\prod_{i=1}^{|C|} a_i^2 r_i^2$ is the common factor extracted from $\det(L_C)$. As such, we conclude that $\det(L_C)$ is proportional to $a_i$ and $r_i$, inducing DPP to select samples that have high anomaly scores and differ from those already collected in the data pool $P$.

The second term, $\sum (-1)^{\tau(p_1, p_2, \ldots, p_{|C|})} \langle s_1, s_{p_1} \rangle \cdots \langle s_{|C|}, s_{p_{|C|}} \rangle$, can be rewritten in the exterior product form $(s_1 \otimes s_2 \otimes \cdots \otimes s_{|C|})^2$ shown in Eq. 11. By the definition of the exterior product (Browne, 2012), it geometrically represents the squared volume $V^2$ of the parallelotope spanned by the vectors $\{s_1, s_2, \ldots, s_{|C|}\}$. Consequently, the more dissimilar these vectors are, the larger the volume $V$ of the spanned parallelotope, and hence the larger $\det(L_C)$.
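The volume identity behind Eq. 10 and Eq. 11 can be checked numerically; the following quick sketch (random vectors, QR factorization) verifies that det(SᵀS) equals the squared volume of the spanned parallelotope.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(10, 4))              # |C| = 4 vectors in R^10, as columns
gram_det = np.linalg.det(S.T @ S)         # det of the Gram matrix
_, R = np.linalg.qr(S)                    # S = QR with Q orthonormal
vol_sq = np.prod(np.diag(R)) ** 2         # squared parallelotope volume
assert np.isclose(gram_det, vol_sq)
```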
A.2 LABELED SAMPLES OVERSAMPLING
In the model tuner, we use the labeled samples to train the representation adjuster. Nevertheless, compared to the unlabeled samples, the feedback-labeled samples account for only a tiny percentage of the overall dataset (e.g., 20 samples per iteration vs. 286,048 samples in total for the Cover dataset). Therefore, we over-sample the labeled samples in each training batch to improve the utilization of these few feedback samples, so that we can fully exploit the feedback information and accelerate the loss convergence. Half of each training batch consists of labeled samples, which are repeatedly drawn from the data pool P; the other half are unlabeled samples, which are randomly drawn from all unlabeled samples. A sketch of this batch composition is given below.
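```python
import numpy as np

def mixed_batch(labeled_idx, unlabeled_idx, batch_size=512, rng=None):
    """Half the batch oversamples the small labeled pool P (with replacement);
    the other half is drawn uniformly from the unlabeled samples (assumes
    len(unlabeled_idx) >= batch_size // 2). Index arrays are illustrative."""
    rng = rng or np.random.default_rng()
    half = batch_size // 2
    lab = rng.choice(labeled_idx, size=half, replace=True)
    unl = rng.choice(unlabeled_idx, size=batch_size - half, replace=False)
    return np.concatenate([lab, unl])
```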
A.3 DATASETS INFORMATION
We used eight public datasets for the evaluation. PageBlocks, Annthyroid, Cardio, Cover, Mammography, Shuttle are available in ODDS 2. KDD99-Http and KDD99-SA are available in UCI Machine
2http://odds.cs.stonybrook.edu/
Table 1: Dataset statistics, listing for each dataset the number of samples, the feature dimension, the number of anomalies, and the anomaly rate.
Learning Repository3. PageBlocks is available in ADBench4. The detailed information of these datasets is shown in Table 1. The number of samples ranges from 1.8K to 286K, and the anomaly rate spans from 0.96% to 9.61%.
A.4 EXPERIMENT ENVIRONMENT
We built LMADA based on PyTorch 1.12.0 (Paszke et al., 2019) and used base unsupervised anomaly detectors implemented in PyOD 1.0.3 (Zhao et al., 2019). In our experiments, we set up a Virtual Machine (VM) with 64 Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz processors and 256GB RAM. The operating system is Ubuntu-20.04. In the VM, we had an NVIDIA Tesla M40 GPU with CUDA 11.4 for deep learning model training.
A.5 EXPERIMENT SETTING DETAILS
LMADA: For the sample selector of LMADA, we set the pre-truncation rate $\alpha = 10\%$. We introduce two hyper-parameters $\lambda$ and $\gamma$ to adjust the preference between anomaly score and diversity ($L_{ij} = (a_i a_j)^{\lambda} (r_i r_j \langle s_i, s_j \rangle)^{\gamma}$). In the experiments, we set $\lambda = 1$ and $\gamma = 1$. In the model tuner, we utilized the Adam optimizer (Kingma & Ba, 2014) and set the epoch number to 10, the learning rate to 0.01, and the batch size to 512, for both the proxy model approximation phase and the representation adjuster tuning phase. The size of the proxy model's hidden layer is set to 64. Specifically for the KDD99-SA dataset, we performed dimensionality reduction (Carreira-Perpinán, 1997) because of its high feature dimensionality and sparsity.
Meta-AAD: We used the source code available at the link provided by the original paper5. We utilized 12 datasets (including toy, yeast, glass, ionosphere, lympho, pima, thyroid, vertebral, vowels, wbc, wine, yeast) for meta-policy training in our experiment. All the datasets are available in the released code repository6. After that, we directly applied the trained meta-policy to the 8 target public datasets. We borrowed the default settings from the original paper: rollout steps $T = 128$, entropy coefficient $c_2 = 0.01$, learning rate $lr = 2.5 \times 10^{-4}$, value function coefficient $c_1 = 0.5$, $\lambda = 0.95$, clip range $\epsilon = 0.2$, and balance parameter $\gamma = 0.6$.
FIF: We used the source code released in the link provided by the original paper 7. We chose the Log-Likelihood loss function for FIF in the experiment. We set the type of regularizer w = 2 and the learning rate a = 1.
GAOD: We implemented GAOD ourselves according to Li et al. (2019) because no source code was released. We set the number of nearest neighbors $k = 30$ and the learning rate of label spreading $\alpha = 0.995$. The standard deviation $\sigma$ of the Gaussian function is set to half of the 95th percentile of the k-th nearest neighbor distances.
3https://archive.ics.uci.edu/ml/machine-learning-databases/kddcup99-mld/kddcup.data.gz 4https://github.com/Minqi824/ADBench 5https://github.com/daochenzha/Meta-AAD 6https://github.com/daochenzha/Meta-AAD/tree/master/data 7https://github.com/siddiqmd/FeedbackIsolationForest
We note that the pairwise distance matrix is required by Meta-AAD and GAOD (for neighborhood retrieval). As such, both approaches fail to work under large data volumes due to the high space complexity ($O(n^2)$). Taking the largest dataset, Cover, as an example (shown in Table 1), the pairwise distance matrix would consume 610 GB of memory in theory, which triggered the Out-Of-Memory (OOM) problem in our experimental environment. Therefore, we only keep the top 50% and 20% of samples for KDD99-SA and Cover, respectively, based on the anomaly scores produced by the base detector. Only these samples are involved in the feedback incorporation of Meta-AAD and GAOD.
A.6 THE COMPLETE RESULTS OF SAMPLE SELECTOR VALIDATION
We illustrate the sample selector validation results on all 8 datasets in Fig. 8. Our sampling strategy outperforms the other sampling methods on most datasets. Compared with the results of FIF and GAOD shown in Fig. 5, we also found that our proposed method still achieved much better F1-Scores even when using the top-selection strategy in the same manner. This further confirms the effectiveness of our proposed model tuner.
A.7 THE COMPLETE RESULTS OF MODEL TUNER VALIDATION
We show the model tuner validation results on all eight datasets in Fig. 9. From these figures, we confirm the conclusion in Sec. 4.6: the proxy model has captured the knowledge learned by the base detection method, as they produce similar anomaly scores. As such, the transformed representations $h_i$ can be directly fed into the base detector.
A.8 EFFECTIVENESS OF PRE-TRUNCATION IN SAMPLE SELECTOR
In Sec. 3.1.2, we introduced pre-truncation to improve sampling efficiency. In this section, we validate its effectiveness in the sample selector. Specifically, we adjusted $\alpha$ from 1% to 60%. We recorded the running time and the corresponding AUC of the F1-Score Curve under different $\alpha$ values, as shown in Fig. 10. From the left figure of Fig. 10, we can conclude that the running time is significantly reduced by more aggressive pre-truncation. For example, the running time is halved if we adjust $\alpha$ from 50% to ∼6%. Moreover, from the right figure of Fig. 10, we can see that the AUC of the F1-Score Curve rises when $\alpha < 10\%$ and then gradually drops as we keep increasing $\alpha$. As discussed in Sec. 3.1.2, this indicates that either a too broad or a too narrow set of candidate samples leads to suboptimal feedback querying. Generally speaking, we set $\alpha$ around the estimated contamination ratio, e.g., 10%.
A.9 EXPLOARATION OF OVER-FITTING PROBLEM
In Sec. 4.3, we found that the comparison methods performed much worse than LMADA. From the feedback incorporation perspective, this is caused by over-fitting to the few top-ranked samples (see Sec. 1). To verify this point, we take GAOD as an example and gradually increase the number of queried samples in each feedback iteration to mitigate the over-fitting problem. We reran GAOD on three datasets (PageBlocks, Shuttle, and KDD99-SA) on which it did not perform well. According to the settings described in the original GAOD paper, the size of the data pool should be set to 2 × #outliers (Li et al., 2019). Therefore, we enlarged the data pool size from 0.5 to 2 × #outliers in strides of 0.5. From the results shown in Fig. 11, we see that GAOD can only achieve improvements in F1-Score with at least 0.5 × #outliers (e.g., the number of queried samples reaches 168 per iteration on the KDD99-SA dataset, far beyond our proposed approach's 20 per iteration). Therefore, it requires a significantly larger labeling effort.
A.10 QUERY NUMBER EXPLORATION
We conducted the comparison experiment under different query numbers per feedback iteration (1, 5, 10, 20) on the KDD99-SA dataset; the results can be found in Fig. 12. From the figure, we can see that LMADA achieves a consistent performance improvement, even with only 1 sample per iteration. On the contrary, the F1-Scores of FIF/GAOD/Meta-AAD fail to increase because they only select the top-ranked samples for updating the model, ignoring low-ranked anomalous samples such as the “smurf” type (as presented in Sec. 2.1).
A.11 ADDITIONAL EXPERIMENT
We add the experimental results of the top-random query strategy in Fig. 13, which represents a random selection from the samples with high anomaly scores. From the results, we can conclude that our sampling method significantly outperforms top-random on the PageBlocks, Cardio, Cover, Mammography, and KDD99-SA datasets and achieves similar performance on the Annthyroid, KDD99-Http, and Shuttle datasets. Moreover, it is worth noting that the variance of the top-random strategy is much larger than that of ours. | 1. What are the main contributions of the paper in addressing the two critical issues in Active Anomaly Discovery (AAD)?
2. What are the strengths of the proposed method, particularly in its diversity-aware sample selector and model-agnostic tuner?
3. Are there any weaknesses or limitations in the proposed approach?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper focuses on two critical issues in the AAD problem and proposes a lightweight, model-agnostic, and diversity-aware AAD method. The first issue is the neglect of lower-ranked samples in active learning, and the second is the difficulty of generalizing from a specific unsupervised detector to other models given different datasets or tasks. The new model addresses both issues and is demonstrated on several benchmarks.
Strengths And Weaknesses
Strength:
The paper proposes and develops an interesting diversity-aware sample selector that considers not only the base detector's scores but also the diversity of the samples.
A novel model-agnostic tuner is developed to integrate both the base model and user feedback.
More specifically, first, a proxy model is developed to mimic the base detector. Second, the proxy network is frozen, and a new representation adjuster is added to learn a new feature space, which seems a concise and effective design.
Weakness:
There is no significant weakness identified in this paper. While the sample selector might be time-consuming and the time complexity of the Determinantal Point Process (DPP) is O(n^2), the run time in practice will most likely not be overwhelming due to the limited number of anomalous samples. Another minor point to consider is adding additional experiments, although the current Meta-AAD dataset has 24 tasks.
Clarity, Quality, Novelty And Reproducibility
The paper is organized and written well. It is novel in the sense that the authors identified two issues in AAD and managed to solve them through an interesting framework that distinguishes it from conventional solutions. The details are well elaborated, based on which the paper should be reproducible.
ICLR | Title
Towards Lightweight, Model-Agnostic and Diversity-Aware Active Anomaly Detection
Abstract
Active Anomaly Discovery (AAD) is flourishing in the anomaly detection research area, which aims to incorporate analysts’ feedback into unsupervised anomaly detectors. However, existing AAD approaches usually prioritize the samples with the highest anomaly scores for user labeling, which hinders the exploration of anomalies that were initially ranked lower. Besides, most existing AAD approaches are specially tailored for a certain unsupervised detector, making it difficult to extend to other detection models. To tackle these problems, we propose a lightweight, model-agnostic and diversity-aware AAD method, named LMADA. In LMADA, we design a diversity-aware sample selector powered by Determinantal Point Process (DPP). It considers the diversity of samples in addition to their anomaly scores for feedback querying. Furthermore, we propose a model-agnostic tuner. It approximates diverse unsupervised detectors with a unified proxy model, based on which the feedback information is incorporated by a lightweight non-linear representation adjuster. Through extensive experiments on 8 public datasets, LMADA achieved 74% F1-Score improvement on average, outperforming other comparative AAD approaches. Besides, LMADA can also achieve significant performance boosting under any unsupervised detectors.
1 INTRODUCTION
Anomaly detection aims to detect the data samples that exhibit significantly different behaviors compared with the majority. It has been applied in various domains, such as fraud detection (John & Naaz, 2019), cyber intrusion detection (Sadaf & Sultana, 2020), medical diagnosis (Fernando et al., 2021), and incident detection (Wang et al., 2020). Numerous unsupervised anomaly detectors have been proposed (Zhao et al., 2019; Boukerche et al., 2020; Wang et al., 2019). However, practitioners are usually unsatisfied with their detection accuracy (Das et al., 2016), because there is usually a discrepancy between the detected outliers and the actual anomalies of interest to users (Das et al., 2017; Zha et al., 2020; Siddiqui et al., 2018). To mitigate this problem, Active Anomaly Discovery (AAD) (Das et al., 2016), is proposed to incorporate analyst’s feedback into unsupervised detectors so that the detection output better matches the actual anomalies.
The general workflow of Active Anomaly Discovery is shown in Fig.1. In the beginning, a base unsupervised anomaly detector is initially trained. After that, a small number of samples are selected to present to analysts for querying feedback. The labeled samples are then utilized to update the detector for feedback information incorporation. Based on the updated detection model, a new set of samples are recommended for the next feedback iteration. Finally, the tuned detection model is ready to be applied after multiple feedback iterations, until the labeling budget is exhausted.
Despite the progress of existing AAD methods (Das et al., 2017; Zha et al., 2020; Siddiqui et al., 2018; Keller et al., 2012; Zhang et al., 2019; Li et al., 2019; Das et al., 2016), some intrinsic limitations of these approaches still pose great barriers to their real-world applications. Firstly, most AAD methods adopt the top-selection strategy for the feedback querying (Das et al., 2017; Zha et al., 2020; Siddiqui et al., 2018; Li et al., 2019), i.e., the samples with the highest anomaly scores are always prioritized for user labeling. However, it hinders exploring the actual anomalies that are not initially scored highly by the base detector. As such, these AAD approaches are
∗Qingwei Lin is the corresponding author.
highly susceptible to over-fitting to the top-ranked samples, resulting in a suboptimal recall with respect to all anomalies. We shall demonstrate this with a real example in Sec. 2.1. Secondly, most existing AAD approaches (Das et al., 2017; 2016; Siddiqui et al., 2018) are tightly tailored for a certain kind of detection model, making it difficult to extend to other unsupervised detectors.
They need to modify the internal structure of a particular type of unsupervised detector, endowing them with the ability of feedback integration. Therefore, it is impractical and ad-hoc to re-design them each time facing such a variety of unsupervised detection models. Recent AAD methods (Zha et al., 2020; Li et al., 2019)
attempted to generalize to arbitrary detectors. However, they can barely scale because their mode size grows with the number of samples.
To tackle these problems in AAD, we propose a Lightweight, Model-Agnostic and Diversity-Aware active anomaly detection approach, named LMADA. It consists of two components, i.e, sample selector (for sample selection) and model tuner (for feedback incorporation). In the sample selector, we take the anomaly scores as well as the diversity of samples into account, instead of solely picking up the most anomalous ones for feedback querying. Specifically, we fuse anomaly scores and the feedback repulsion scores into a diversity-aware sampling technology powered by Determinantal Point Processes (DPP) (Chen et al., 2018; Kulesza et al., 2012). In the model tuner, we first leverage a neural network as the proxy model to approximate an arbitrary unsupervised detector. After that, we fix the weights of the proxy model and learn a representation adjuster on top of it. The representation adjuster is responsible for transforming the input feature vector to fit the feedback-labeled samples. Finally, each sample to be detected is transformed by the representation adjuster and then fed back to the base detector to estimate its anomaly score. In this way, the model tuner shields the details of different unsupervised detectors and achieves lightweight feedback incorporation, only via a non-linear representation transformation.
We conducted extensive experiments on 8 public AD datasets to evaluate the effectiveness of our proposed method. The experimental results show that LMADA can achieve 74% F1-Score improvement on average, outperforming other comparative AAD approaches under the same feedback sample budget. In addition, we also validated that LMADA works well under various unsupervised anomaly detectors.
2 RELATED WORK AND MOTIVATION
In this section, we will give a brief introduction to the existing AAD work and analyze their limitations from two aspects: (1) sample selection and (2) feedback incorporation.
2.1 SAMPLE SELECTION
Most AAD approaches (Siddiqui et al., 2018; Das et al., 2017; Zha et al., 2020; Li et al., 2019; Das et al., 2016) adopt the top-selection strategy. The anomalous samples, that are not ranked on the top initially by the base detector, would have little chance to be selected for feedback, and therefore can hardly be recalled subsequently. We show a real example using KDD-99 SA1, which is a famous intrusion detection dataset. The dataset contains one normal class (96.7%) and 11 anomalous classes (3.3%) of various intrusion types. We applied the Isolation Forest (Liu et al., 2012) detector (a widely accepted one) to this dataset and found that the recall was around 0.28. We show the anomaly score distribution for the normal samples and three major intrusion types, respectively, in Fig. 2. Only the samples of two intrusion types, i.e., “neptune” and “satan”, are assigned high anomaly scores (0.60 ∼ 0.70). However, the samples of another major intrusion type “smurf” (accounts for 71.27% of all anomalous samples) are assigned relatively low anomaly scores (0.50 ∼ 0.55), which is even below the anomaly scores of many normal samples (4168 normal samples vs. 15 “smurf” anomalies were assigned anomaly scores over 0.55). Under this circumstance, selecting the top samples only for feedback can hardly improve the recall for the “smurf” type. In LMADA, we consider both anomaly scores as well as the diversity of samples during the sample selection. In this way, samples
1https://archive.ics.uci.edu/ml/machine-learning-databases/kddcup99-mld/kddcup.data.gz
not initially ranked on the top, like the “smurf” anomalies in our example, can have an opportunity to present to analysts.
2.2 FEEDBACK INCORPORATION
How to incorporate feedback information is another focus of AAD. Das et al.(Das et al., 2017) added a set of adjustable weights to the random projections generated by LODA detector (Pevnỳ, 2016), by which the feedback can be incorporated. They also modified Isolation Forest (Liu et al., 2012) by adding weights to the tree paths, re-weighting the isolation score based on the feedback (Das et al., 2016). Siddiqui et al.(Siddiqui et al., 2018) extended the re-weighting strategy to the Generalized Linear Anomaly Detectors (GLAD) with the help of online convex optimization (Hazan et al., 2016). iRRCF-Active (Wang et al., 2020) also borrowed the above similar idea into iRRCF detector (Guha et al., 2016). In summary, the above methods require tailoring the weights specific to the certain model structure of different unsupervised detectors and then adjusting the weights with feedback-labeled samples by gradient descent. However, it is impractical for such a diverse range of unsupervised detectors as the modification is sophisticated and case-by-case. In LMADA, we propose a model-agnostic method to incorporate feedback information, regardless of the type of unsupervised detectors.
We also note that some AAD approaches have been proposed and attempted to support arbitrary base detectors. Meta-AAD (Zha et al., 2020) first extracts a set of transferable features based on k-neighbors to labeled instances and feeds them into a pre-trained meta-policy model for detection. GAOD (Li et al., 2019) leverages label spreading (Zhou et al., 2003), a graph-based semi-supervised model, to iteratively spread label information to neighbors. In summary, both AAD methods leverage neighborhoods of labeled instances to exploit feedback information but require persisting the entire dataset for neighboring sample retrieval. Therefore, the final tuned detection model would become increasingly heavier and heavier. In this paper, the feedback incorporation of LMADA is achieved by only a non-linear transformation, which is lightweight enough for real-world application.
3 APPROACH
In this section, we will elaborate on the details about LMADA. Following the general AAD workflow shown in Fig.1, LMADA consists of two components, i.e., sample selector and model tuner. In the sample selector, we consider the diversity in addition to the anomaly scores when recommending valuable samples for labeling. In the model tuner, we proposed a model-agnostic strategy to incorporate feedback information for arbitrary unsupervised detectors. It is achieved in a lightweight manner, only relying on a simple non-linear transformation.
3.1 SAMPLE SELECTOR
As discussed in Sec. 2.1, sample selection of AAD should consider the diversity of the selected samples in addition to the anomaly scores. The diversity here is not in terms of anomaly scores but in the distribution of the samples. In summary, our attempt is to select a subset of samples with high anomaly scores, and meanwhile, are dissimilar from each other. We use the example shown in Fig. 3 to illustrate this idea. There are two types of anomalies A and B that stray from the majority of samples. The anomaly scores (based on the Isolation Forest) are indicated by the colors. The
deeper the color, the higher the anomaly score. The selected samples are indicated by the blue cross markers. The number of selected samples is fixed as 20. Type-B anomalies are assigned relatively lower anomaly scores compared with type-A because they are more adjacent to the normal samples.
If we use the top-selection strategy, the selected samples would mostly come from type-A (as shown in the left subfigure of Fig.3), which may not cover the other types of anomalies. Therefore, the feedback would not help the AAD to recall more anomalies, e.g., type-B in this example. The desired sample selection is shown in the right subfigure of Fig.3, where the selector achieves a good coverage for all samples with relatively high anomaly scores. In this way, we can enhance the anomaly scores of all anomaly types, instead of only those originally ranked high by the base detector.
Inspired by (Chen et al., 2018), we leverage a widely-adopted diversity sampling method, i.e., Determinantal Point Processes (DPP) (Kulesza et al., 2012), to achieve the above sampling target. We first introduce DPP in Sec. 3.1.1, and then describe how we balance the dual objectives, i.e., anomaly score and diversity, in Sec. 3.1.2.
3.1.1 DETERMINANTAL POINT PROCESSES (DPP)
The Determinantal Point Process (DPP) was originally introduced from fermion systems in thermal equilibrium (Macchi, 1975; Chen et al., 2018). Recently, it has been successfully applied to various machine learning tasks, e.g., image search (Kulesza & Taskar, 2011a), document summarization (Kulesza & Taskar, 2011b) and recommendation systems (Gillenwater et al., 2014). Given a dataset D = {s1, s2, ..., sn}, DPP aims to select a subset C from D. Specifically, DPP constructs a real positive semidefinite (PSD) kernel matrix L ∈ Rn×n derived from D. For each subset C ⊆ D, the probability of selecting C from D, denoted as P (C), is proportional to det(LC), where det(LC) is the determinantal value of the principal minor LC . The objective of DPP is to derive C∗ which maximizes the value of det(LC), shown in Eq.1. As an example, to achieve maximum diversity, the kernel matrix could be constructed as the pairwise similarity matrix (Kulesza et al., 2012).
C∗ = argmaxC⊆Ddet(LC) (1)
How to approximately solve this NP-hard problem (Ko et al., 1995) has been well studied in (Gillenwater et al., 2012; Han et al., 2017; Li et al., 2016; Chen et al., 2018) and we adopt the greedy algorithm proposed in (Chen et al., 2018) in our paper. We will introduce how to construct a specially tailored kernel matrix L for AAD in the next section.
3.1.2 KERNEL MATRIX CONSTRUCTION
In LMADA, we construct a kernel matrix L, whose entries can be formally written as Eq.2,
L_{ij} = \langle a_i r_i s_i, \; a_j r_j s_j \rangle = a_i a_j r_i r_j \langle s_i, s_j \rangle \quad (2)
where a_i denotes the anomaly score uniformly re-scaled to the range [0, 1]. It is used to motivate DPP to select samples with high anomaly scores. Meanwhile, we need to select diverse samples within and across feedback iterations. In each feedback iteration, the inner product ⟨s_i, s_j⟩ measures the pairwise similarity of all candidate samples, based on which DPP prefers dissimilar samples (Kulesza et al., 2012). As there are multiple feedback iterations, we expect the samples selected in the current iteration to also differ from those sampled in previous iterations. To achieve this, we maintain a data pool P preserving the selected samples from previous feedback iterations. The minimum distance between a candidate sample s_i and the selected samples cached in P is defined as the feedback repulsion score r_i, as shown in Eq. 3.
r_i = \min(\{\, 1 - \langle s_i, s_k \rangle \mid s_k \in P \,\}) \quad (3)
From Eq. 2, we can conclude that det(L_C) is proportional to a_i a_j r_i r_j and inversely proportional to ⟨s_i, s_j⟩ among the selected samples in C. In this way, it induces DPP to select more anomalous (i.e., higher a_i a_j) data points that are not adjacent to the previously selected examples (i.e., higher r_i r_j). Meanwhile, the selected data points are also sufficiently distinct from each other (i.e., lower ⟨s_i, s_j⟩). A qualitative analysis is given in Appendix Sec. A.1.
Theoretically, the complexity of constructing L is O(n^2), which is expensive for a large dataset. However, anomalous samples generally account for a small percentage of the whole dataset compared with the normal class (Zhao et al., 2019; Boukerche et al., 2020). For instance, in the KDD99-SA dataset introduced in Sec. 2.1, only 3.3% of samples are anomalies. It is unnecessary to regard all samples as candidates for the sample selector. Consequently, we construct the kernel matrix with only the pre-truncated top α% of samples ranked by their anomaly scores. In general, if α is small enough (e.g., < 3%), the selected samples would be those with the highest anomaly scores, i.e., similar to top-selection. On the other hand, if α is large (e.g., > 30%), the selected samples become too diverse to retrieve samples worthwhile for feedback. We evaluate different α settings in Appendix Sec. A.8. A sketch of this kernel construction is given below.
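A minimal sketch of the kernel construction with pre-truncation and the feedback repulsion score; the function and variable names are ours, and we assume L2-normalized sample vectors so that inner products act as similarities.

```python
import numpy as np

def build_kernel(S, scores, pool, alpha=0.10):
    """Construct the LMADA kernel L_ij = a_i a_j r_i r_j <s_i, s_j> (Eq. 2).

    S: (n, d) L2-normalized sample vectors; scores: raw anomaly scores;
    pool: (m, d) previously labeled samples (may be empty).
    Only the top alpha fraction by anomaly score becomes candidates.
    """
    n = len(scores)
    keep = np.argsort(-scores)[: max(1, int(alpha * n))]   # pre-truncation
    s, a = S[keep], scores[keep]
    a = (a - a.min()) / (a.max() - a.min() + 1e-12)        # rescale to [0, 1]
    if len(pool) > 0:                                      # repulsion score, Eq. 3
        r = (1.0 - s @ pool.T).min(axis=1)
    else:
        r = np.ones(len(keep))
    q = a * r                                              # per-item quality
    L = np.outer(q, q) * (s @ s.T)                         # Eq. 2, vectorized
    return L, keep
```

The returned kernel can be fed directly to the greedy DPP sketch above; `keep` maps selected candidate indices back to the full dataset.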
3.2 MODEL TUNER
After labeling the examples recommended by the sample selector, the model tuner focuses on incorporating the newly labeled data points. The model tuner should be agnostic to the base unsupervised detector; in other words, any unsupervised detection model can be easily integrated into our framework. To achieve this goal, we propose a three-phase model tuner in LMADA, as shown in Fig. 4. First, we set up a neural network as a proxy model (Coleman et al., 2019) to mimic the behaviors of diverse base detectors. After that, a representation adjuster is added in front of the frozen proxy model and trained on the labeled samples. Finally, the tuned representation adjuster is used to transform the original samples into new representation vectors, which are fed back to the base detector for re-scoring. The feedback continues for multiple iterations until the sampling budget is exhausted. The tuned representation adjuster is applied as illustrated in Phase-3 of Fig. 4. Given a testing sample s_i, we first transform it into a new representation vector h_i via the representation adjuster Ω(s_i). Then we directly feed h_i into the base anomaly detector f and get the final detection result f(h_i). In this way, LMADA achieves feedback incorporation in a lightweight manner, with only a non-linear representation transformation.
3.2.1 PROXY MODEL APPROXIMATION
As introduced in Sec. 2.2, unsupervised detectors of various types pose a great challenge to model-agnostic AAD. There are significant differences between the model structures of different unsupervised detectors. Most existing AAD work (Siddiqui et al., 2018; Das et al., 2017; 2016; Wang et al., 2020) needs to specifically modify the internal structure of the unsupervised detector.
To tackle this problem, we utilize a deep neural network as a proxy model to approximate the behaviors of diverse unsupervised detectors. In this way, we turn unsupervised detectors into gradient-optimizable neural networks, which facilitates the subsequent representation adjuster tuning (more details in Sec. 3.2.2). As shown in Phase-1 of Fig. 4, we use the normalized anomaly scores f(s_i) generated by the base detector as pseudo-labels and set up a neural network Φ in parallel to fit them. The proxy model is composed of one input layer and multiple hidden layers followed by an output layer activated by the sigmoid function. The Mean-Squared-Error (MSE) is adopted as the loss function during proxy model training: L_{proxy} = \sum_{i=1}^{b} (\Phi(s_i) - f(s_i))^2, where b denotes the batch size.
After the proxy model training, the anomalous patterns that are captured by the base detectors have been learned by the proxy model, i.e., the proxy anomaly scores Φ (si) ≈ f (si). The key point here is that the internal structures of different unsupervised detectors do not need to be considered in this training process.
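A minimal PyTorch sketch of the proxy model and its training loop (LMADA is built on PyTorch per Appendix Sec. A.4); the class and function names are ours, and the hidden size of 64 follows the setting in Appendix Sec. A.5.

```python
import torch
import torch.nn as nn

class ProxyModel(nn.Module):
    """A small MLP fitting the base detector's normalized anomaly scores."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def train_proxy(proxy, loader, epochs=10, lr=0.01):
    """loader yields (samples, base-detector scores f(s) in [0, 1])."""
    opt = torch.optim.Adam(proxy.parameters(), lr=lr)
    for _ in range(epochs):
        for s, f_s in loader:
            loss = ((proxy(s) - f_s) ** 2).sum()   # L_proxy: batch MSE
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Because only scores are fitted, nothing about the base detector's internals (trees, kernels, reconstruction errors, ...) enters this loop, which is what makes the tuner model-agnostic.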
3.2.2 REPRESENTATION ADJUSTER TUNING
In Phase-2, we devise a representation adjuster Ω in front of the proxy model to incorporate the feedback information. The representation adjuster is a simple non-linear transformation layer, which takes the original sample vector s_i as input and transforms it into a new feature space with the same dimensionality, i.e., h_i = Ω(s_i) = sigmoid(W s_i), where h_i ∈ R^d and s_i ∈ R^d. As shown in the middle of Fig. 4, the transformed h_i is fed into the trained proxy model Φ to generate the proxy anomaly score Φ(h_i). Based on that, W is updated under the loss function in Eq. 4. The representation adjuster can be trained by a gradient descent optimizer because the subsequent proxy model (as shown in Fig. 4) is also a neural network. The parameters of the proxy model are frozen during the representation adjuster tuning phase.
L_{adjuster} = L_{feedback} + L_{consolidation} + \eta \quad (4)
L_{adjuster} is composed of three components, i.e., a feedback loss, a consolidation loss and a regularization term η. L_{feedback} is used to fit the labeled samples in the data pool P, as shown in Eq. 5, where y_i represents the feedback label (+1 for the anomalous class and -1 for the normal class) of the sample s_i.
L_{feedback} = -\sum_{i=1}^{b} y_i \cdot \log(\Phi(h_i)), \quad \forall s_i \in P \quad (5)
Training with only a few labeled samples would make the representation adjuster biased toward the feedback labels while ignoring the patterns already learned from the base detector. We therefore design another component inspired by (Li & Hoiem, 2017), i.e., L_{consolidation}, that serves to consolidate the knowledge of the base unsupervised detector, as shown in Eq. 6. h̃_i denotes the transformed sample representation from the last feedback iteration (h̃_i = s_i in the first feedback iteration). It forces the proxy anomaly scores Φ(h_i) of the remaining unlabeled samples to stay close to the original anomaly scores f(h̃_i) in the newly transformed feature space. We note that L_{consolidation} is not conducive to fitting L_{feedback}, as the former tends to retain the original representation. To achieve a trade-off between them, we assign a weight to the consolidation loss of each sample. Intuitively, if an unlabeled sample s_i is similar to the labeled samples in the feedback data pool P, its consolidation loss should have a lower weight, reducing the constraints on fitting L_{feedback}. On the contrary, unlabeled samples that are unlike the data points in P should be assigned a higher weight to enhance the influence of the consolidation loss. This intuition is fully aligned with the feedback repulsion score r_i introduced in Sec. 3.1.2, and we thus use it as the weight of the consolidation loss.
L_{consolidation} = \sum_{i=1}^{b} r_i \cdot \left( \Phi(h_i) - f(\tilde{h}_i) \right)^2, \quad \forall s_i \notin P \quad (6)
The last component, η, is a penalty on the feature space transformation, since drastic changes to the original sample vectors are undesired. To this end, we set \eta = \sum_{i=1}^{b} \|h_i - s_i\|^2. More training details for the representation adjuster can be found in Appendix Sec. A.2. A sketch of the adjuster and its loss is given below.
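A minimal PyTorch sketch of the representation adjuster and the combined loss of Eq. 4–6, following the equations literally; the names and the batch plumbing are ours, and the clamp merely guards the logarithm numerically.

```python
import torch
import torch.nn as nn

class RepresentationAdjuster(nn.Module):
    """h = sigmoid(W s): one non-linear layer, same dimensionality as s."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)

    def forward(self, s):
        return torch.sigmoid(self.W(s))

def adjuster_loss(proxy, omega, s_lab, y, s_unl, f_prev, r):
    """Eq. 4 = feedback loss + weighted consolidation loss + penalty.

    y in {+1, -1} are feedback labels; f_prev are scores f(h~) from the
    previous iteration; r are repulsion scores of the unlabeled samples.
    """
    h_lab, h_unl = omega(s_lab), omega(s_unl)
    p = proxy(h_lab).clamp(1e-6, 1 - 1e-6)
    # Eq. 5: push labeled anomalies (+1) up and normal samples (-1) down
    l_feedback = -(y * torch.log(p)).sum()
    # Eq. 6: keep unlabeled scores near the base detector's, weighted by r
    l_consolidation = (r * (proxy(h_unl) - f_prev) ** 2).sum()
    # eta: penalize drastic departures from the original representations
    eta = ((h_lab - s_lab) ** 2).sum() + ((h_unl - s_unl) ** 2).sum()
    return l_feedback + l_consolidation + eta
```

During tuning, only `omega.W` receives gradients; the proxy's parameters stay frozen as described above.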
4 EXPERIMENT
4.1 DATASETS AND SETTINGS
We evaluated our proposed method on 8 public datasets, including PageBlocks, Annthyroid, Cardio, Cover, KDD99-Http, Mammography, KDD99-SA and Shuttle, which are widely used by existing AAD approaches (Siddiqui et al., 2018; Zha et al., 2020; Li et al., 2019; Das et al., 2017; 2019). The details of these datasets can be found in Appendix Sec. A.3. We run 5 feedback iterations and query 20 samples in each iteration. Following existing work, we used simulated feedback instead of real user feedback, since the ground truth is known for these public datasets. The experimental environment and the parameter settings can be found in Appendix Sec. A.4 and Sec. A.5, respectively.
4.2 COMPARISON METHODS AND METRICS
We compared LMADA with three state-of-the-art AAD methods, i.e., FIF (Siddiqui et al., 2018), Meta-AAD (Zha et al., 2020), and GAOD (Li et al., 2019). FIF adds a set of weights to the tree branches of the Isolation Forest detector and tunes them via online convex optimization with feedback information. GAOD utilizes a semi-supervised method (label spreading; Zhou et al., 2003) to consume user feedback. Both of these approaches adopt the top-selection strategy. Meta-AAD extracts a set of transferable features for a pre-trained meta-policy detection model, considering both long-term and short-term benefits in querying feedback.
We use the F1-Score Curve to evaluate the effectiveness of different AAD methods. Specifically, we calculate the F1-Score on the entire dataset after each feedback iteration. In addition, we calculate the Area-Under-Curve (AUC) (Ling et al., 2003) of the F1-Score Curve.
4.3 COMPARISON EXPERIMENT RESULTS
We compared our proposed method with three state-of-the-art AAD approaches, and the results are illustrated in Fig. 5. For fairness, we used Isolation Forest as the base detector because it is adopted by all the comparison methods (Zha et al., 2020; Siddiqui et al., 2018; Li et al., 2019). To ensure reproducibility, we repeated our experiments 10 times on each dataset and plotted the average F1-Score with standard error bars (Altman & Bland, 2005). The AUC value of each F1-Score Curve is shown in the legend.
From the results, we can confirm that LMADA performs better than the other AAD methods. With 20 feedback samples per iteration, LMADA achieved consistently higher F1-Scores on most datasets. Especially on the KDD99-SA, Cover, and Cardio datasets, LMADA boosted the F1-Score of the base detector by an average of 144%, to 0.80+, after 5 feedback iterations. For the PageBlocks, Annthyroid, and Mammography datasets, LMADA also increased the F1-Score by 60% on average, significantly outperforming the other AAD models. As for the KDD99-Http and Shuttle datasets, the initial performance of the base detector is already at a relatively high level. Under these circumstances, LMADA still maintains a high detection accuracy, exhibiting its robustness.
Among the comparison methods, Meta-AAD performed much better than the other two because it utilizes reinforcement learning to learn a meta-policy for feedback querying, rather than simply picking the samples with the highest anomaly scores. However, it does not explicitly take the diversity of samples into account, resulting in relatively lower performance compared with LMADA (e.g., 0.29 AUC of Meta-AAD vs. 0.87 AUC of LMADA on the KDD99-SA dataset). FIF and GAOD even had difficulty preserving the upward trend of their F1-Score curves as more feedback samples were added. As we discussed in Sec. 2.1, the top-selection strategy of both methods hinders the exploration of lower-ranked anomalous samples. Moreover, their detectors were tuned to over-fit the scarce feedback-labeled samples, leading to a decreasing recall. We verify this in Appendix Sec. A.9.
4.4 MODEL-AGNOSTIC EVALUATION
We aim to propose a model-agnostic AAD approach that can be easily extended to arbitrary unsupervised detectors. As such, we evaluated the effectiveness of LMADA under five different but commonly-used unsupervised detectors, including AutoEncoder (Vincent et al., 2010), PCA (Shyu et al., 2003), OCSVM (Schölkopf et al., 2001), LODA (Pevnỳ, 2016; Das et al., 2016), and IF. The experimental settings are the same as in Sec. 4.3 and the results are shown in Fig. 6.
From these figures, we can conclude that LMADA works well on different unsupervised detectors. It consistently improves the F1-Score on all eight datasets regardless of which base detector is adopted. Moreover, we found that the performance gains achieved by LMADA vary with different unsupervised detectors. Taking the KDD99-Http dataset as an example, LODA performs much worse than the other base detectors at the beginning (F1-Score 0.02 compared to ∼0.82 for the other detectors). Even so, LMADA was able to improve the performance of LODA from 0.02 to 0.96 after 5 iterations. We also note that the variance of its results is significantly larger than the others. The reason is that LODA is inaccurate and unstable on the KDD99-Http dataset, making it difficult to provide effective information for the sample selector and the model tuner. These experimental results confirm that the initial performance of the base detector has a great influence on AAD approaches.
4.5 SAMPLE SELECTOR VALIDATION
In this section, we validate the effectiveness of the proposed sample selector in LMADA. As discussed in Sec. 2.1, diversity plays a critical role in AAD. To verify this point, we conducted an ablation study on the KDD99-SA dataset. In this dataset, 11 anomalous classes and the normal class are annotated separately, so we can study how samples are selected by different sampling strategies. We compared our proposed sampling method with the commonly-used top-selection strategy (Das et al., 2017; 2016; Siddiqui et al., 2018) and the stratified sampling described in (Guha et al., 2016) (i.e., divide samples into g groups based on their anomaly scores and then select examples randomly from each group). The model tuner is fixed. The anomalous classes selected under these settings and the corresponding improved F1-Scores are shown in Fig. 7(a) and Fig. 7(b), respectively.
From Fig. 7(a), we can see that the sample selector of LMADA covers more anomaly classes than the other two sampling strategies. Furthermore, Fig. 7(b) confirms the necessity of diversity-aware selection, since our sample selector achieved much higher F1-Scores than the top-selection and stratified sampling methods. For example, in the first feedback iteration, our sample selector chose “smurf” samples (shown in blue) for feedback, which were missed by the other two. As stated in Sec. 2.1, “smurf” samples were not assigned high anomaly scores by the base detector (IF), yet they actually account for 71.27% of all anomalies. Consequently, the F1-Score improves significantly from 0.28 to 0.94 with labeled “smurf” anomalies, while the other two strategies failed to achieve this high F1-Score. The complete results on all datasets can be found in Appendix Sec. A.6.
4.6 MODEL TUNER VALIDATION
In this section, we present the effectiveness of our proposed model tuner. As introduced in Sec. 3.2, the transformed representations h_i are trained based on the proxy model but are fed back to the base unsupervised detector to get the final anomaly scores. We aim to study how large the difference is between the anomaly scores generated by the base detector, f(h_i), and by the proxy model, Φ(h_i). We conducted this ablation experiment on the KDD99-SA dataset and the results are exhibited in Fig. 7(c).
This figure shows that there is only a narrow gap in F1-Scores between the proxy model (green line) and the base unsupervised detector (red line). This indicates that the proxy model has captured the knowledge learned by the base detection method, as they produce similar anomaly scores. As such, the transformed representations h_i trained via the proxy model can be smoothly transferred to the base unsupervised detector. The complete experimental results on all datasets can be found in Appendix Sec. A.7.
5 CONCLUSION
In this paper, we propose LMADA, a lightweight, model-agnostic and diversity-aware active anomaly detection method. In the sample selector of LMADA, we take the diversity of samples into account in addition to their anomaly scores, unlike most existing AAD work that solely picks the most anomalous samples for feedback querying. In the model tuner of LMADA, we propose a model-agnostic strategy to incorporate feedback information regardless of the type of unsupervised detector, achieved by a lightweight non-linear transformation. Through extensive evaluation on 8 public AD datasets, we show that LMADA achieves a 74% F1-Score improvement on average, significantly outperforming other comparative AAD approaches.
A APPENDIX
A.1 THE QUALITATIVE ANALYSIS OF EXTENDED DPP IN SAMPLE SELECTOR
The kernel matrix L is shown in Eq. 2. As introduced in Sec. 3.1.1, we aim to select a subset C with the highest det(L_C). The principal minor L_C is as follows:

L_C =
\begin{pmatrix}
a_1^2 r_1^2 \langle s_1, s_1 \rangle & \cdots & a_1 a_j r_1 r_j \langle s_1, s_j \rangle & \cdots & a_1 a_{|C|} r_1 r_{|C|} \langle s_1, s_{|C|} \rangle \\
a_2 a_1 r_2 r_1 \langle s_2, s_1 \rangle & \cdots & a_2 a_j r_2 r_j \langle s_2, s_j \rangle & \cdots & a_2 a_{|C|} r_2 r_{|C|} \langle s_2, s_{|C|} \rangle \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
a_i a_1 r_i r_1 \langle s_i, s_1 \rangle & \cdots & a_i a_j r_i r_j \langle s_i, s_j \rangle & \cdots & a_i a_{|C|} r_i r_{|C|} \langle s_i, s_{|C|} \rangle \\
\vdots & \ddots & \vdots & \ddots & \vdots \\
a_{|C|} a_1 r_{|C|} r_1 \langle s_{|C|}, s_1 \rangle & \cdots & a_{|C|} a_j r_{|C|} r_j \langle s_{|C|}, s_j \rangle & \cdots & a_{|C|}^2 r_{|C|}^2 \langle s_{|C|}, s_{|C|} \rangle
\end{pmatrix}
\quad (7)
det(L_C) can be calculated as in Eq. 8:

\det(L_C) = \sum (-1)^{\tau(p_1, p_2, \ldots, p_{|C|})} L_{1 p_1} L_{2 p_2} \cdots L_{|C| p_{|C|}} \quad (8)
where p_1, p_2, ..., p_{|C|} range over all permutations of {1, 2, ..., |C|}, and τ(p_1, p_2, ..., p_{|C|}) denotes the inversion number of p_1, p_2, ..., p_{|C|}. According to Eq. 2, det(L_C) can be further expanded as Eq. 9:
\det(L_C) = \prod_{i=1}^{|C|} a_i^2 r_i^2 \sum (-1)^{\tau(p_1, p_2, \ldots, p_{|C|})} \langle s_1, s_{p_1} \rangle \langle s_2, s_{p_2} \rangle \cdots \langle s_{|C|}, s_{p_{|C|}} \rangle \quad (9)

= \prod_{i=1}^{|C|} a_i^2 r_i^2 \cdot \left| \det\left( [s_1, s_2, \ldots, s_{|C|}]^{\top} [s_1, s_2, \ldots, s_{|C|}] \right) \right| \quad (10)

= \prod_{i=1}^{|C|} a_i^2 r_i^2 \cdot \left( s_1 \otimes s_2 \otimes \cdots \otimes s_{|C|} \right)^2 = \prod_{i=1}^{|C|} a_i^2 r_i^2 \cdot V^2 \quad (11)
\prod_{i=1}^{|C|} a_i^2 r_i^2 is the common factor extracted from det(L_C). As such, we can conclude that det(L_C) is proportional to a_i and r_i, inducing DPP to select samples that have high anomaly scores and differ from those already selected in the data pool P.
The second term, \sum (-1)^{\tau(p_1, \ldots, p_{|C|})} \langle s_1, s_{p_1} \rangle \cdots \langle s_{|C|}, s_{p_{|C|}} \rangle, can be further rewritten in the exterior product form (s_1 \otimes s_2 \otimes \cdots \otimes s_{|C|})^2 shown in Eq. 11. According to the definition of the exterior product (Browne, 2012), it geometrically represents the volume V of the parallelotope spanned by the vectors {s_1, s_2, ..., s_{|C|}}. Consequently, the more dissimilar the vectors are, the larger the volume V of the spanned parallelotope, and hence the larger det(L_C). A numerical sanity check follows.
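A quick numerical check of the factorization in Eq. 9–11, assuming only numpy: the determinant of the principal minor equals the common factor times the squared volume det(S S^T).

```python
import numpy as np

rng = np.random.default_rng(0)
C, d = 4, 6
S = rng.normal(size=(C, d))           # rows s_1 .. s_|C|
a = rng.uniform(0.1, 1.0, size=C)     # anomaly scores
r = rng.uniform(0.1, 1.0, size=C)     # repulsion scores

q = a * r
L_C = np.outer(q, q) * (S @ S.T)      # principal minor built from Eq. 2

lhs = np.linalg.det(L_C)
rhs = np.prod(q ** 2) * np.linalg.det(S @ S.T)   # common factor times Gram determinant
assert np.isclose(lhs, rhs)           # matches Eq. 11: prod a_i^2 r_i^2 times V^2
```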
A.2 LABELED SAMPLES OVERSAMPLING
In the model tuner, we use the labeled samples to train the representation adjuster. Nevertheless, compared to the unlabeled samples, the feedback-labeled samples account for only a tiny percentage of the overall dataset (e.g., 20 samples per iteration vs. 286,048 samples in total for the Cover dataset). Therefore, we over-sample the labeled samples in each training batch to improve the utilization of the few feedback samples, fully exploiting the feedback information and accelerating loss convergence. Half of each training batch consists of labeled samples, repeatedly drawn from the data pool P; the other half consists of unlabeled samples, randomly drawn from all unlabeled samples. A sketch of this batch construction follows.
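A minimal sketch of the batch construction, assuming numpy index arrays; the names are ours.

```python
import numpy as np

def make_batch(pool_idx, unlabeled_idx, batch_size=512, rng=np.random):
    """Half of each batch is drawn (with replacement) from the small
    feedback pool P, the other half from the unlabeled samples."""
    half = batch_size // 2
    lab = rng.choice(pool_idx, size=half, replace=True)       # oversampled
    unl = rng.choice(unlabeled_idx, size=half, replace=False)
    return lab, unl
```

Sampling the pool with replacement is what realizes the oversampling: a 20-sample pool appears many times per epoch while each unlabeled sample appears rarely.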
A.3 DATASETS INFORMATION
We used eight public datasets for the evaluation. PageBlocks, Annthyroid, Cardio, Cover, Mammography, and Shuttle are available in ODDS 2. KDD99-Http and KDD99-SA are available in the UCI Machine Learning Repository 3. PageBlocks can also be found in ADBench 4. The detailed information of these datasets is shown in Table 1; the number of samples ranges from 1.8K to 286K and the anomaly rate spans from 0.96% to 9.61%.
2 http://odds.cs.stonybrook.edu/
[Table 1: per-dataset statistics — Samples, Dimension, Anomaly Number, Anomaly Rate.]
A.4 EXPERIMENT ENVIRONMENT
We built LMADA based on PyTorch 1.12.0 (Paszke et al., 2019) and used base unsupervised anomaly detectors implemented in PyOD 1.0.3 (Zhao et al., 2019). In our experiments, we set up a Virtual Machine (VM) with 64 Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz processors and 256GB RAM. The operating system is Ubuntu-20.04. In the VM, we had an NVIDIA Tesla M40 GPU with CUDA 11.4 for deep learning model training.
A.5 EXPERIMENT SETTING DETAILS
LMADA: For the sample selector of LMADA, we set the pre-truncation rate α = 10%. We introduce two hyper-parameters λ and γ to adjust the preference between anomaly score and diversity: L_{ij} = (a_i a_j)^{λ} (r_i r_j ⟨s_i, s_j⟩)^{γ}. In the experiments, we set λ = 1 and γ = 1. In the model tuner, we utilized the Adam optimizer (Kingma & Ba, 2014) and set the number of epochs to 10, the learning rate to 0.01, and the batch size to 512, for both the proxy model approximation phase and the representation adjuster tuning phase. The size of the proxy model hidden layer is set to 64. Specifically for the KDD99-SA dataset, we performed dimensionality reduction (Carreira-Perpinán, 1997) because it is characterized by high feature dimensionality and sparsity.
Meta-AAD: We used the source code available at the link provided by the original paper 5. We utilized 12 datasets (including toy, yeast, glass, ionosphere, lympho, pima, thyroid, vertebral, vowels, wbc, wine, yeast) for meta-policy training in our experiment. All the datasets are available in the released code repository 6. After that, we directly applied the trained meta-policy to the targeted 8 public datasets. We borrowed the default settings from the original paper in our experiments: rollout steps T = 128, entropy coefficient c2 = 0.01, learning rate lr = 2.5 × 10^{-4}, value function coefficient c1 = 0.5, λ = 0.95, clip range ϵ = 0.2, balance parameter γ = 0.6.
FIF: We used the source code released at the link provided by the original paper 7. We chose the log-likelihood loss function for FIF in the experiment. We set the type of regularizer w = 2 and the learning rate a = 1.
GAOD: We implemented GAOD ourselves according to Li et al. (2019) because the source code is not released. We set the number of nearest neighbors k = 30 and the learning rate of label spreading α = 0.995. The standard deviation σ of the Gaussian function is set to half of the 95th percentile of the k-th nearest neighbor distances.
3 https://archive.ics.uci.edu/ml/machine-learning-databases/kddcup99-mld/kddcup.data.gz
4 https://github.com/Minqi824/ADBench
5 https://github.com/daochenzha/Meta-AAD
6 https://github.com/daochenzha/Meta-AAD/tree/master/data
7 https://github.com/siddiqmd/FeedbackIsolationForest
We note that the pairwise distance matrix is required by Meta-AAD and GAOD (for neighborhood retrieval). As such, both approaches fail under large data volumes due to the high space complexity (O(n^2)). Taking the largest dataset, Cover, as an example (see Table 1), the pairwise distance matrix would consume 610 GB of memory in theory, which triggers an Out-Of-Memory (OOM) problem in our experimental environment. Therefore, we only keep the top 50% and 20% of samples for KDD99-SA and Cover, respectively, based on the anomaly scores produced by the base detector. Only these samples are involved in the feedback incorporation of Meta-AAD and GAOD.
A.6 THE COMPLETE RESULTS OF SAMPLE SELECTOR VALIDATION
We illustrate the sample selector validation results on all 8 datasets in Fig. 8. Our sampling strategy outperforms the other sampling methods on most datasets. Compared with the results of FIF and GAOD shown in Fig. 5, we also found that our proposed method still achieved much better F1-Scores even when using the top-selection strategy in the same manner. This confirms the effectiveness of our proposed model tuner from another angle.
A.7 THE COMPLETE RESULTS OF MODEL TUNER VALIDATION
We show the model tuner validation results on all eight datasets in Fig. 9. These figures confirm the conclusion in Sec. 4.6: the proxy model has captured the knowledge learned by the base detection method, as they produce similar anomaly scores. As such, the transformed representations h_i can be directly fed into the base detector.
A.8 EFFECTIVENESS OF PRE-TRUNCATION IN SAMPLE SELECTOR
In Sec. 3.1.2, we introduced pre-truncation to improve sampling efficiency. In this section, we validate its effectiveness in the sample selector. Specifically, we adjusted α from 1% to 60% and recorded the running time and the corresponding AUC of the F1-Score Curve under each α value, as shown in Fig. 10. From the left figure of Fig. 10, we conclude that running time can be significantly reduced by more aggressive pre-truncation; for example, the running time is halved if we reduce α from 50% to ∼6%. Moreover, from the right figure of Fig. 10, we can see that the AUC of the F1-Score rises when α < 10% and then gradually drops as we keep increasing α. As discussed in Sec. 3.1.2, this shows that either a too broad or a too narrow set of candidate samples leads to suboptimal feedback querying. Generally speaking, we set α around the estimated contamination ratio, e.g., 10%.
A.9 EXPLORATION OF THE OVER-FITTING PROBLEM
In Sec. 4.3, we found that the comparison methods performed much worse than LMADA. From the feedback incorporation perspective, this is caused by over-fitting to the few top-ranked samples (see Sec. 1). To verify this point, we take GAOD as an example and gradually increase the number of queried samples in each feedback iteration to mitigate the over-fitting problem. We rerun GAOD on three datasets (PageBlocks, Shuttle and KDD99-SA) where it did not perform well. According to the settings described in the original GAOD paper, the size of the data pool should be set to 2 × #outliers (Li et al., 2019). Therefore, we enlarged the data pool size from 0.5 to 2 × #outliers with a stride of 0.5. From the results shown in Fig. 11, we see that GAOD only achieves improvements in F1-Score with at least 0.5 × #outliers (e.g., the number of queried samples reaches 168 per iteration on the KDD99-SA dataset, far beyond our proposed approach with 20 per iteration). It therefore requires a significantly larger labeling effort.
A.10 QUERY NUMBER EXPLORATION
We conducted the comparison experiment under different query numbers per feedback iteration (1, 5, 10, 20) on the KDD99-SA dataset; the results are shown in Fig. 12. From the figure, we can see that LMADA achieves a consistent performance improvement even with only 1 sample per iteration. On the contrary, the F1-Scores of FIF/GAOD/Meta-AAD fail to increase because they only select the top-ranked samples for updating the model, ignoring low-ranked anomaly samples such as the “smurf” type (as presented in Sec. 2.1).
A.11 ADDITIONAL EXPERIMENT
We add the experimental results of the top-random query strategy in Fig. 13, which represents a random selection from samples with high anomaly scores. From the results, we can conclude that our sampling method significantly outperforms top-random on the PageBlocks, Cardio, Cover, Mammography, and KDD99-SA datasets and achieves similar performance on the Annthyroid, KDD99-Http, and Shuttle datasets. Moreover, it is worth noting that the variance of the top-random strategy is much larger than that of ours. | 1. What is the focus and contribution of the paper on active anomaly detection?
2. What are the strengths of the proposed approach, particularly in terms of diversity-aware sample selection and lightweight feedback incorporation?
3. What are the weaknesses of the paper regarding its sampling method, sensitivity to hyperparameters, and limitations in comparisons with other methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a lightweight, model-agnostic and diversity-aware active anomaly detection method. It takes the diversity of samples into account and designs a diversity-aware sample selector powered by determinantal point processes. It proposes a model-agnostic strategy to incorporate feedback information, which approximates diverse unsupervised detectors with a unified proxy model.
Strengths And Weaknesses
Strengths: 1. It considers the diversity of samples instead of an anomaly-score top-selection strategy for feedback querying, preventing over-fitting to the top-ranked samples. 2. The proposed method shields the details of different unsupervised detectors and achieves lightweight feedback incorporation via only a non-linear representation transformation. 3. It conducts extensive experiments on eight public datasets and outperforms other comparative AAD approaches.
Weaknesses: 1. The proposed sampling method is somewhat incremental, as it simply uses an existing method, DPP, to select samples. The authors may need to re-summarize the main contributions of the paper; the contributions are not clear. 2. The proposed sampling method makes the pre-truncation ratio alpha too important. Selecting such an alpha requires extensive experiments, and the best alpha may differ across datasets. For example, as shown in Appendix A.8, when the ratio changes from 10% to 8%, the performance decreases by 5%, which is too sensitive. 3. It lacks a comparison against other sampling methods that select from the top alpha samples, like simple random sampling. That is, the proposed method uses DPP to select diverse samples from the top 10% of anomalous data, but simply using random sampling over the same top 10% could also provide data diversity. 4. The experiments on LMADA under different base unsupervised detectors only consider classical unsupervised detectors like PCA and OCSVM. When using a deep model, I am not sure whether the proxy model could mimic the detector, or whether the cost in time and memory would be too large. 5. This paper uses a proxy model to imitate different anomaly detectors. The authors claim that it is model-agnostic. However, when using different anomaly detectors, the learned parameters in the proxy models differ. Thus, I think it is not model-agnostic.
Clarity, Quality, Novelty And Reproducibility
The writing is clear. The idea of considering sample diversity is good, but it simply uses the existing DPP to solve the problem. It is hard to reproduce the results because no source code or network details are provided in this paper.
ICLR | Title
Unrestricted Adversarial Examples via Semantic Manipulation
Abstract
Machine learning models, especially deep neural networks (DNNs), have been shown to be vulnerable against adversarial examples which are carefully crafted samples with a small magnitude of the perturbation. Such adversarial perturbations are usually restricted by bounding their Lp norm such that they are imperceptible, and thus many current defenses can exploit this property to reduce their adversarial impact. In this paper, we instead introduce “unrestricted” perturbations that manipulate semantically meaningful image-based visual descriptors – color and texture – in order to generate effective and photorealistic adversarial examples. We show that these semantically aware perturbations are effective against JPEG compression, feature squeezing and adversarially trained model. We also show that the proposed methods can effectively be applied to both image classification and image captioning tasks on complex datasets such as ImageNet and MSCOCO. In addition, we conduct comprehensive user studies to show that our generated semantic adversarial examples are photorealistic to humans despite large magnitude perturbations when compared to other attacks.
1 INTRODUCTION
Machine learning (ML), especially deep neural networks (DNNs) have achieved great success in various tasks, including image recognition (Krizhevsky et al., 2012; He et al., 2016), speech processing (Hinton et al., 2012) and robotics training (Levine et al., 2016). However, recent literature has shown that these widely deployed ML models are vulnerable to adversarial examples – carefully crafted perturbations aiming to mislead learning models (Carlini & Wagner, 2017; Kurakin et al., 2016; Xiao et al., 2018b). The fast growth of DNNs based solutions demands in-depth studies on adversarial examples to help better understand potential vulnerabilities of ML models and therefore improve their robustness.
To date, a variety of approaches have been proposed to generate adversarial examples (Goodfellow et al., 2014b; Carlini & Wagner, 2017; Kurakin et al., 2016; Xiao et al., 2018a); many of these attacks search for a perturbation within a bounded Lp norm in order to preserve photorealism. However, it is known that the Lp norm distance is not an ideal perceptual similarity metric (Johnson et al., 2016; Isola et al., 2017). In addition, recent work shows that defenses trained on Lp-bounded perturbations are not robust against new types of unseen attacks (Kang et al., 2019). Therefore, exploring diverse adversarial examples, especially those with an “unrestricted" magnitude of perturbation, has attracted a lot of attention in both academia and industry (Brown et al., 2018).
Recent work based on generative adversarial networks (GANs) (Goodfellow et al., 2014a) has introduced unrestricted attacks (Song et al., 2018). However, these attacks are limited to datasets like MNIST, CIFAR and CelebA, and are usually unable to scale up to bigger and more complex datasets such as ImageNet. Xiao et al. (2018b) directly manipulated the spatial pixel flow of an image to produce adversarial examples without Lp-bounded constraints on the perturbation; however, that attack does not explicitly control the visual semantic representation. More recently, Hosseini & Poovendran (2018) manipulated the hue and saturation of an image to create adversarial perturbations. However, these examples are easily distinguishable by humans and are also not scalable to complex datasets.
∗ indicates equal contributions.
In this work, we propose unrestricted attack strategies that explicitly manipulate semantic visual representations to generate natural-looking adversarial examples that are “far” from the original image in terms of the Lp norm distance. In particular, we manipulate color (cAdv) and texture (tAdv) to create realistic adversarial examples (see Fig. 1). cAdv adaptively chooses locations in an image at which to change colors, producing adversarial perturbations that are usually fairly substantial, while tAdv utilizes texture from other images and adjusts the instance’s texture field using style transfer.
These semantic transformation-based adversarial perturbations shed light on what information is important for DNNs to make predictions. For instance, in one of our case studies, when the road is recolored from gray to blue, the image is misclassified as tench (a fish) although a car remains evidently visible (Fig. 2b). This indicates that deep learning models can easily be fooled by certain large-scale patterns. In addition to image classifiers, the proposed attack methods can be generalized to different machine learning tasks such as image captioning (Karpathy & Fei-Fei (2015)). Our attacks can either change the entire caption to the target (Chen et al., 2017; Xu et al., 2019) or take on more challenging tasks like changing one or two specific target words of the caption to a target. For example, in Fig. 1, “stop sign” in the original image caption is changed to “cat sitting” and “umbrella is” for cAdv and tAdv respectively.
To ensure our “unrestricted" semantically manipulated images are natural, we conducted extensive user studies with Amazon Mechanical Turk. We also tested our proposed attacks on several state of the art defenses. Rather than just showing the attacks break these defenses (better defenses will come up), we aim to show that cAdv and tAdv are able to produce new types of adversarial examples. Experiments also show that our proposed attacks are more transferable given their large and structured perturbations (Papernot et al., 2016). Our semantic adversarial attacks provide further insights about the vulnerabilities of ML models and therefore encourage new solutions to improve their robustness.
In summary, our contributions are: 1) We propose two novel approaches to generate “unrestricted" adversarial examples via semantic transformation; 2) We conduct extensive experiments to attack both image classification and image captioning models on large scale datasets (ImageNet and MSCOCO); 3) We show that our attacks are equipped with unique properties such as smooth cAdv perturbations and structured tAdv perturbations. 4) We perform comprehensive user studies to show that when compared to other attacks, our generated adversarial examples appear more natural to humans despite their large perturbations; 5) We test different adversarial examples against several state of the art defenses and show that the proposed attacks are more transferable and harder to defend.
2 COLORIZATION ATTACK (cADV)
Background. Image colorization is the task of giving a natural colorization to a grayscale image. This is an ill-posed problem, as there are multiple viable natural colorizations for a single grayscale image. Deshpande et al. (2017) showed that diverse image colorization can be achieved by using an architecture that combines a VAE (Kingma & Welling (2013)) and a Mixture Density Network, while Zhang et al. (2017) demonstrated improved and diverse image colorization by using input hints from users to guide the colorization process.
Our goal is to adversarially color an image by leveraging a pretrained colorization model. We hypothesize that it is possible to find a natural colorization that is adversarial for a target model (e.g., classifier or captioner) by searching in the color space. Since a colorization network learns to color natural colors that conform to boundaries and respect short-range color consistency, we can use it to introduce smooth and consistent adversarial noise with a large magnitude that looks natural to humans. This attack differs from common adversarial attacks which tend to introduce short-scale high-frequency artifacts that are minimized to be invisible for human observers.
We leverage the colorization model of Zhang et al. (2016; 2017) for our attack. In their work, they produce natural colorizations on ImageNet with input hints from the user. The inputs to their network consist of the L channel of the image in CIELAB color space, X_L ∈ R^{H×W×1}, the sparse colored input hints X_{ab} ∈ R^{H×W×2}, and the binary mask M ∈ B^{H×W×1} indicating the locations of the hints. cAdv Objectives. There are a few ways to leverage the colorization model to achieve adversarial objectives. We experimented with two main methods and achieved varied results.
Network weights. The straightforward approach to producing adversarial colors is to modify the colorization network C of Zhang et al. (2017) directly. To do so, we simply update C by minimizing the adversarial loss objective J_adv, which in our case is the cross-entropy loss; t represents the target class and F the victim network.
\theta^* = \arg\min_{\theta} J_{adv}(\mathcal{F}(\mathcal{C}(X_L, X_{ab}, M; \theta)), t) \quad (1)
Hints and mask. We can also vary the input hints X_{ab} and the mask M to produce adversarial colorizations. Hints provide the network with ground-truth color patches that guide the colorization, while the mask provides their spatial locations. By jointly varying both hints and mask, we can manipulate the output colorization. We update the hints and mask as follows:
M^*, X_{ab}^* = \arg\min_{M, X_{ab}} J_{adv}(\mathcal{F}(\mathcal{C}(X_L, X_{ab}, M; \theta)), t) \quad (2)
cAdv Attack Methods. Attacking the network weights allows the network to search the color space for adversarial colors with no constraints. This attack is the easiest to optimize, but the output colors are not realistic, as shown in Fig. 2. The various strategies outlined below are ineffective here, as the model learns to generate adversarial colors without taking color realism into account. However, the produced colorizations often correlate with colors observed in the target class. This suggests that classifiers associate certain colors with certain classes, which we discuss further in our case study.
Attacking the input hints and mask jointly gives us natural results, as the pretrained network is not affected by our optimization. Attacking hints and mask separately also works but takes a long optimization time and gives slightly worse results. For our experiments, we use the Adam optimizer (Kingma & Ba (2014)) with a learning rate of 10^{-4} in cAdv. We iteratively update the hints and mask until our adversarial image reaches the target class and the confidence change between consecutive iterations does not exceed a threshold of 0.05. A sketch of this optimization loop follows.
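A minimal PyTorch sketch of the hints-and-mask attack (Eq. 2), under the assumption that `colorizer` and `victim` wrap the pretrained colorization model and the target classifier; all names are ours, the binary mask is relaxed to a continuous tensor for gradient purposes, and the stopping rule follows the description above.

```python
import torch
import torch.nn.functional as F

def cadv_attack(colorizer, victim, X_L, hints, mask, target, max_steps=500):
    """Optimize hints and mask with Adam (lr 1e-4); stop once the target
    class is reached and the confidence change drops below 0.05.
    `target` is a LongTensor of shape (1,) holding the target class index."""
    hints = hints.clone().requires_grad_(True)
    mask = mask.clone().requires_grad_(True)
    opt = torch.optim.Adam([hints, mask], lr=1e-4)
    prev_conf = 0.0
    for _ in range(max_steps):
        img = colorizer(X_L, hints, mask)          # recolored image
        logits = victim(img)
        loss = F.cross_entropy(logits, target)     # adversarial objective
        opt.zero_grad()
        loss.backward()
        opt.step()
        conf = torch.softmax(logits.detach(), dim=1)[0, target.item()].item()
        if logits.argmax(1).item() == target.item() and abs(conf - prev_conf) < 0.05:
            break
        prev_conf = conf
    return colorizer(X_L, hints.detach(), mask.detach())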
Control over colorization. Current attack methods lack control over where the attack occurs, opting to attack all pixels indiscriminately. This lack of control is unimportant for most attacks, where the perturbation is small, but is concerning in cAdv, where making unstructured large changes can be jarring. To produce realistic colorizations, we need to avoid making large color changes at locations where colors are unambiguous (e.g., roads are in general gray) and focus on locations where colors are ambiguous (e.g., an umbrella can have different colors). To do so, we need to segment an image and determine which segments should be attacked or preserved.
To segment the image into meaningful areas, we cluster the image’s ground-truth AB space using K-Means. We first use a Gaussian filter with σ = 3 to smooth the AB channels and then cluster them into 8 clusters. We then determine which clusters’ colors should be preserved. Fortunately, the network of Zhang et al. (2017) outputs a per-pixel color distribution for a given image, which we use to calculate the entropy of each pixel. The entropy represents how confident the network is in assigning a color at that location, and the average entropy of each cluster represents how ambiguous its color is. We want to avoid making large changes to clusters with low entropy while allowing our attack to change clusters with high entropy. One way to enforce this behavior is through hints, which are sampled from the ground truth at locations belonging to clusters of low entropy. We sample hints from the k clusters with the lowest entropy, which we refer to as cAdv_k (e.g., cAdv_2 samples hints from the 2 lowest-entropy clusters). A sketch of this cluster-entropy computation follows.
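A minimal sketch of the entropy-based cluster selection, assuming scipy and scikit-learn; `color_dist` stands in for the per-pixel color distribution produced by the colorization network, and all names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def low_entropy_clusters(ab, color_dist, k=4, n_clusters=8):
    """Cluster the smoothed ground-truth AB channels and rank clusters by
    the mean per-pixel entropy of the predicted color distribution; hints
    are then sampled from the k lowest-entropy clusters.

    ab: (H, W, 2) AB channels; color_dist: (H, W, Q) per-pixel distribution.
    """
    ab_smooth = gaussian_filter(ab, sigma=(3, 3, 0))          # sigma = 3 spatially
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        ab_smooth.reshape(-1, 2)).reshape(ab.shape[:2])
    p = np.clip(color_dist, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=-1)                   # per-pixel entropy
    cluster_entropy = [entropy[labels == c].mean() for c in range(n_clusters)]
    return np.argsort(cluster_entropy)[:k], labels            # preserved clusters
```

Pixels inside the returned clusters supply the ground-truth hints; everything else is left free for the attack to recolor.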
Number of input hints. Network hints constrain our output to have colors similar to the ground truth, avoiding unnatural colorizations at the cost of color diversity. This trade-off is controlled by the number of hints given to the network as initialization (Fig. 4). Generally, providing more hints yields colors similar to those observed in the original image. However, having too many hints is also problematic: it makes the optimization between drawing adversarial colors and matching local color hints difficult. Since more hints further constrain the search space for adversarial colors, we may instead generate unrealistic examples.
Number of Clusters. The trade-off between the color diversity and the color realism is also controlled by the number of clusters we sample hints from as shown in Fig. 3. Sampling from multiple clusters gives us realistic colors closer to the ground truth image at the expense of color diversity.
Empirically, we find that in terms of color diversity, realism, and robustness of attacks, using k = 4 and 50 hints gives us better adversarial examples. For the rest of this paper, we fix 50 hints for all cAdv_k methods.
3 TEXTURE ATTACK (tADV)
Background. Texture transfer extracts texture from one image and adds it to another. Transferring texture from a source image to a target image has been widely studied in computer vision (Efros & Freeman (2001); Gatys et al. (2015)). The Convolutional Neural Network (CNN) based texture transfer of Gatys et al. (2015) led to a series of new ideas in artistic style transfer (Gatys et al. (2016); Huang & Belongie (2017); Li et al. (2017); Yeh et al. (2019)). More recently, Geirhos et al. (2018) showed that DNNs trained on ImageNet are biased towards texture when making predictions.
Our goal is to generate adversarial examples by infusing texture from another image without explicit constraints on the Lp norm of the perturbation. To generate our tAdv examples, we use a pretrained VGG19 network (Simonyan & Zisserman, 2014) to extract textural features. We directly optimize the victim image (I_v) by adding texture from a target image (I_t). A natural strategy for transferring texture is to minimize the within-layer feature correlation statistics (gram matrices) between two images (Gatys et al. (2015; 2016)). Following Yeh et al. (2019), we find that optimizing cross-layer gram matrices instead of within-layer gram matrices helps produce more natural-looking adversarial examples. The difference is that for the within-layer case, the feature statistics are computed within the same layer, whereas for the cross-layer case, they are computed between two adjacent layers.
tAdv Objectives. tAdv directly attacks the image to create adversarial examples without modifying network parameters. Moreover, there is no additional content loss as used in style transfer methods (Gatys et al. (2016); Yeh et al. (2019)). Our overall objective function for the texture attack contains a texture transfer loss (L_t^A) and a cross-entropy loss (J_adv).
\mathcal{L}_{tAdv}^A = \alpha \mathcal{L}_t^A(I_v, I_t) + \beta J_{adv}(\mathcal{F}(I_v), t) \quad (3)
Unlike style transfer methods, we do not aim for artistically pleasing adversarial examples. Our goal is to infuse a reasonable texture from a target-class image into the victim image and fool a classifier or captioning network. To ensure a reasonable texture is added without perturbing the victim image too much, we introduce an additional constraint on the variation of the victim image’s gram matrices. This constraint helps us control the image transformation procedure and prevents it from producing artistic images. Let m and n denote two layers of a pretrained VGG-19 with decreasing spatial resolution and C the number of filter maps in layer n; our texture transfer loss is then given by
\mathcal{L}_t^A(I_v, I_t) = \sum_{(m,n) \in \mathcal{L}} \frac{1}{C^2} \sum_{ij} \frac{\left\| G_{ij}^{m,n}(I_v) - G_{ij}^{m,n}(I_t) \right\|^2}{\mathrm{std}\left( G_{ij}^{m,n}(I_v) \right)} \quad (4)
Let f denote feature maps and Uf^n an upsampled f^n that matches the spatial resolution of layer m. The cross-layer gram matrix G between the victim image (I_v) and a target image (I_t) is given as

G_{ij}^{m,n}(I) = \sum_p \left[ f_{i,p}^m(I) \right] \left[ U f_{j,p}^n(I) \right]^{\top} \quad (5)

Texture Transfer. To create tAdv adversarial examples, we need to find images from which to extract texture, which we call the “texture source” (T_s). A naive strategy is to randomly select an image from the data bank as T_s. Though this strategy is successful, the resulting perturbations are clearly perceptible. Alternatively, we can randomly select T_s from the adversarial target class. This strategy produces less perceptible perturbations than the random-T_s method, as we extract texture from the known target class. A better strategy for selecting T_s is to find the target-class image that lies closest to the victim image in feature space using nearest neighbors. This strategy is sensible, as it ensures that our victim image has feature statistics similar to our target image. Consequently, minimizing the gram matrix difference is easier and our attack generates more natural-looking images (see Fig. 5).

For texture transfer, we extract the cross-layer statistics in Eq. 4 from the R11, R21, R31, R41, and R51 layers of a pretrained VGG19 and optimize our objective (Eq. 3) using the L-BFGS (Liu & Nocedal (1989)) optimizer (sketches of both are given below). tAdv attacks are sensitive: if not controlled well, images get transformed into artistic images. Since we have no constraint on the perturbation norm, it is necessary to decide when to stop the texture transfer procedure. For a successful attack (i.e., images that look realistic), we limit L-BFGS to a fixed number of small steps and perform two sets of experiments: one with a single iteration (round) of L-BFGS for 14 steps, and another with three iterations of 14 steps. In the three-iteration setup, after every iteration we check the confidence of our target class and stop if it exceeds 0.9.
Texture and Cross-Entropy Weights. Empirically, we found that setting α in the range [150, 1000] and β in the range [10^{-4}, 10^{-3}] is successful and also produces less perceptible tAdv examples. The additional cross-entropy-based adversarial objective J_adv helps our optimization. We ensure that the dominant gradient flow comes from the texture loss and that it is sufficiently larger than that of the adversarial cross-entropy objective. The adversarial objective also helps transform the victim image into an adversarial one without stylizing the image. All our tabulated results are shown for one iteration, α = 250 and β = 10^{-3}, unless otherwise stated. We use the annotation tAdv_α^{iter} in the rest of the paper to denote the texture method we are using.
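A minimal sketch of the L-BFGS attack loop for Eq. 3, reusing the `texture_loss` sketch above; `vgg_feats` is assumed to return the five listed VGG19 activations, and all names are ours.

```python
import torch
import torch.nn.functional as F

def tadv_attack(vgg_feats, victim, I_v, I_t, target,
                alpha=250.0, beta=1e-3, iters=1, steps=14):
    """One to three rounds of 14 L-BFGS steps minimizing Eq. 3, stopping
    early once the target-class confidence exceeds 0.9."""
    x = I_v.clone().requires_grad_(True)
    feats_t = [f.detach() for f in vgg_feats(I_t)]        # texture-source stats
    opt = torch.optim.LBFGS([x], max_iter=steps)
    for _ in range(iters):
        def closure():
            opt.zero_grad()
            loss = alpha * texture_loss(vgg_feats(x), feats_t) \
                 + beta * F.cross_entropy(victim(x), target)
            loss.backward()
            return loss
        opt.step(closure)                                 # one 14-step round
        conf = torch.softmax(victim(x.detach()), dim=1)[0, target.item()].item()
        if conf > 0.9:
            break
    return x.detach()
```

Keeping β several orders of magnitude below α realizes the design choice above: texture statistics drive the update while the cross-entropy term only nudges the class boundary.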
Control over Texture. The amount of texture added to the victim image is controlled by the texture weight coefficient (α). Increasing the texture weight improves the attack success rate at the cost of more noticeable perturbation. Compared to within-layer statistics, the cross-layer statistics we use are not only better at extracting texture but also make the texture weight easier to control.
4 EXPERIMENTAL RESULTS
In this section, we evaluate the two proposed attack methods both quantitatively, via attack success rates under different settings, and qualitatively, through case studies. We conduct our experiments on ImageNet (Deng et al., 2009) by randomly selecting correctly predicted images from 10 sufficiently different classes for the classification attack.
We use a pretrained ResNet 50 classifier (He et al. (2016)) for all our methods. DenseNet 121 and VGG 19 (Huang et al.; Simonyan & Zisserman (2014)) are used for our transferability analysis.
4.1 cADV ATTACK
cAdv achieves a high targeted attack success rate by adding realistic color perturbations. Our numbers in Table 1 and Table 2 also reveal that cAdv examples with larger color changes (and consequently more color diversity) are more transferable and more robust against adversarial defenses. However, these big changes are found to be slightly less realistic in our user study (Table 2, Table 4).
Smooth cAdv perturbations. Fig. 8 in our Appendix shows interesting properties of the adversarial colors. We observe that cAdv perturbations are locally smooth and are relatively low-frequency. This is different from most adversarial attacks that generate high-frequency noise-like perturbations. This phenomenon can be explained by the observation that colors are usually smooth within object boundaries. The pretrained colorization model will thus produce smooth, low-frequency adversarial colors that conform to object boundaries.
Importance of color in classification. From Fig. 2, we can compare how different target classes affect our colorization results if we relax the constraints on colors (cAdv on network weights, 0 hints). In many cases, the images contain strong colors that are related to the target class. In the case of golf-cart, we get a green tint over the entire image. This can push the target classifier to misclassify the image, as green grass is usually overabundant in benign golf-cart images. Fig. 2b shows our attack on an image of a car targeted to tench (a type of fish). We observe that the gray road turns blue and that the colors are tinted. We hypothesize that the blue colors and the tint fooled the classifier into thinking the image is a tench in the sea.
The colorization model is originally trained to produce natural colorizations that conform to object boundaries. By adjusting its parameters, we are able to produce large and abnormal color changes that are impossible with our attack on hints and mask. These colors, however, give some evidence that colors play a stronger role in classification than commonly thought. We reserve the exploration of this observation for future work.
While this effect (strong color correlation to target class) is less pronounced for our attack on hints and mask, for all cAdv methods, we observe isoluminant color blobs. Isoluminant colors are characterized
by a change in color without a corresponding change in luminance. As most color changes occur along edges in natural images, it is likely that classifiers trained on ImageNet have never seen isoluminant colors. This suggests that cAdv might be exploiting isoluminant colors to fool classifiers.
4.2 tADV ATTACK
tAdv successfully fools the classifiers with a very small weighted adversarial cross-entropy objective (β) when combined with texture loss, while remaining realistic to humans. As shown in Table 1, our attacks are highly successful on white-box attacks tested on three different models with the nearest neighbor texture transfer approach. We also show our attacks are more transferable to other models. In our Appendix, we show ablation results for tAdv attacks along with other strategies that we used for generating tAdv adversarial examples.
Structured tAdv Perturbations. Since we extract features across different layers of VGG, the tAdv perturbations follow a textural pattern: they are more structured and organized than those of other attacks. Our tAdv perturbations are large in Lp norm compared with existing attack methods; they are high-frequency and yet barely perceptible (see Fig. 1 and Fig. 8).
Importance of Texture in Classification. Textures are crucial descriptors for image classification, and ImageNet-trained models can be exploited by altering texture. Their importance is also shown in the recent work of Geirhos et al. (2018). Our results show that even a small or imperceptible change in the texture field can break current state-of-the-art classifiers.
4.3 DEFENSE AND TRANSFERABILITY ANALYSIS
We test all our attacks and other existing methods with images attacked on ResNet50. We evaluate them against three defenses – JPEG defense (Das et al., 2017), feature squeezing (Xu et al., 2017) and adversarial training. By leveraging JPEG compression and decompression, adversarial noise may be removed; we tested our methods against JPEG compression at quality 75. Feature squeezing is a family of simple but surprisingly effective strategies, including reducing color bit depth and spatial smoothing. Adversarial training has been shown to be an effective but costly method to defend against adversarial attacks: mixing adversarial samples into the training data of a classifier improves its robustness without affecting overall accuracy. We obtained an adversarially pretrained ResNet152 model on the ImageNet dataset, and hence tested our ResNet50-attacked images with this model. Sketches of the first two defenses are given below.
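Hedged sketches of the first two defenses, assuming Pillow and numpy; these are standard formulations of the ideas rather than the exact implementations of Das et al. (2017) and Xu et al. (2017).

```python
import io
import numpy as np
from PIL import Image

def jpeg_defense(img_uint8, quality=75):
    """Compress-decompress round trip intended to strip adversarial noise."""
    buf = io.BytesIO()
    Image.fromarray(img_uint8).save(buf, format="JPEG", quality=quality)
    return np.array(Image.open(buf))

def reduce_bit_depth(img_uint8, bits=4):
    """Color bit-depth squeezing: quantize each channel to 2**bits levels;
    returns a float image in [0, 255]."""
    levels = 2 ** bits - 1
    return np.round(img_uint8 / 255.0 * levels) / levels * 255.0
```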
Robustness. In general, our attacks are more robust to the considered defenses and more transferable for targeted attacks. For cAdv, there is a trade-off between more realistic colors (using more hints and sampling from more clusters) and attack robustness. Tables 1 and 2 show that as we progressively use more clusters, our transferability and defense numbers drop; a similar trend is observed when changing the number of hints. cAdv is robust to the JPEG defense and adversarial training because of its large and spatially smooth perturbations. For tAdv, increasing the texture weight (α) does not necessarily perform well against the defenses even though it increases the attack success rate, but increasing the texture flow with more iterations improves the attack's robustness against defenses.
5 HUMAN PERCEPTUAL STUDIES
To quantify how realistic tAdv and cAdv examples are, we conducted a user study on Amazon Mechanical Turk (AMT). We follow the same procedure as described in (Zhang et al., 2016; Xiao et al., 2018b). For each attack, we choose the same 200 adversarial images and their corresponding benign ones. During each trial, one random adversarial-benign pair appears for three seconds and workers are given five minutes to identify the realistic one. Each attack has 600 unique pairs of images and each pair is evaluated by at least 10 unique workers. We restrict biases in this process by allowing each unique user up to 5 rounds of trials and by ignoring users who complete the study in less than 30 seconds. In total, 598 unique workers completed at least one round of our user study. For each image, we can then calculate the user preference score as the number of times it is chosen divided by the number of times it is displayed. A score of 0.5 means that users are unable to distinguish whether the image is fake. For cAdv and tAdv, the average user preference is 0.476 and 0.433 respectively, indicating that workers have a hard time distinguishing them. The user preferences for all attacks are summarized in Table 2 and their comparison with Lp norm is in Table 4 and Table 5.
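The per-image preference score described above reduces to a simple count; a minimal sketch follows, with the data layout assumed rather than taken from the study's actual pipeline.

```python
# User preference score: times an image was chosen as "more realistic",
# divided by times it was displayed; 0.5 means indistinguishable from benign.
from collections import defaultdict

def preference_scores(responses):
    # responses: iterable of (image_id, was_chosen) pairs from the AMT trials
    shown, chosen = defaultdict(int), defaultdict(int)
    for image_id, was_chosen in responses:
        shown[image_id] += 1
        chosen[image_id] += int(was_chosen)
    return {i: chosen[i] / shown[i] for i in shown}
```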
6 ATTACKING CAPTIONING MODEL
Our methods are general and can be easily adapted to other learning tasks. As proof of concept, we test our attacks against the image captioning task. Image captioning is the task of generating a sequence of words describing an image. The popular architectures for captioning are Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) based models (Karpathy & Fei-Fei, 2015; Wang et al., 2017). Recently, Aneja et al. (2018) proposed a convolutional captioning model for fast and accurate caption generation. This convolutional approach does not suffer from the commonly known problems of vanishing gradients and overly confident predictions in LSTM networks. Therefore, we choose to attack this current state-of-the-art convolutional captioning model. We randomly selected images from MSCOCO (Lin et al., 2014) for the image captioning attack.
Attacking captioning models is harder than attacking classifiers when the goal is to change exactly one word in the benign image's caption, unlike pixel-based attacks (Chen et al., 2017; Xu et al., 2019). We show that our attacks are successful and have no visible artifacts even for this challenging task. In Fig. 6, we change the second word of the caption to dog while keeping the rest of the caption the same. This is a challenging targeted attack because, in many untargeted attacks, the resulting captions do not make sense. More examples are in our Appendix.
Adversarial Cross-Entropy Objective for Captioning. Let t be the target caption, w denote the word position in the caption, F the captioning model, I_v the victim image, and J_adv the cross-entropy loss:

$L^{A}_{capt} = \sum_{w} J_{adv}\big((F(I_v))_w,\, t_w\big) \qquad (6)$
For cAdv, we give all color hints and optimize to get an adversarially colored image that produces the target caption. For tAdv, we add Eqn 6 to Eqn 4 to optimize the image. We select Ts as the nearest neighbor of the victim image among images of the adversarial target class in the ImageNet dataset. We stop our attack once we reach the target caption and the caption does not change in consecutive iterations. Note that we do not change the network weights; we only optimize hints and mask (for cAdv) or the victim image (for tAdv) to achieve our target caption.
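A minimal PyTorch-style sketch of Eq. (6) is shown below; the captioner interface (returning per-position word logits) is an assumption for illustration, not the exact API of the convolutional captioning model.

```python
# Targeted captioning objective of Eq. (6): cross-entropy between per-word
# logits and the target caption, summed over word positions.
import torch.nn.functional as F_nn

def caption_attack_loss(captioner, image, target_word_ids):
    logits = captioner(image)  # assumed shape: (num_words, vocab_size)
    # reduction="sum" realizes the sum over word positions w in Eq. (6).
    return F_nn.cross_entropy(logits, target_word_ids, reduction="sum")
```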
7 RELATED WORK
Here we briefly summarize existing unrestricted and semantic adversarial attacks. Xiao et al. (2018b) proposed geometric or spatial distortion of pixels in an image to create adversarial examples. They distort the input image by optimizing pixel flow instead of pixel values. While this attack leads to "natural"-looking adversarial examples with large L∞ norm, it does not take image semantics into account. Song et al. (2018) and Dunn et al. (2019) considered GANs for adversarial attacks. These attacks are unrestricted in Lp norm but are limited to simple datasets, as they involve training GANs, which are known to be unstable and computationally intensive for complex datasets like ImageNet (Karras et al., 2017; Brock et al., 2018).
Hosseini & Poovendran (2018) change the hue and saturation of an image randomly to create adversarial examples. This is similar to cAdv in that both involve changing colors; however, their search space is limited to two dimensions and their images are unrealistic (see Fig. 10 in the Appendix). Also, while this method has a non-trivial untargeted attack success rate, it performs extremely poorly for targeted attacks (1.20% success rate in our own experiments on ImageNet). Our work is also related to Joshi et al. (2019) and Qiu et al. (2019), who manipulate images conditioned on face-dataset attributes such as glasses or beards. These works focus on changing a single visual attribute and are conditionally dependent. Our work focuses on changing visual semantic descriptors to misclassify images and is not conditioned on any semantic attributes.
8 CONCLUSION
Our two proposed novel unrestricted semantic attacks shed light on the role of the texture and color fields in influencing DNNs' predictions. They not only consistently fool human subjects but are in general harder to defend against. We hope that by presenting our methods, we encourage future studies on unbounded adversarial attacks, better metrics for measuring perturbations, and more sophisticated defenses.
ACKNOWLEDGEMENTS
We thank Chaowei Xiao for sharing their code to compare our methods with Xiao et al. (2018b) and for helping us set up the user study. We also thank Tianyuan Zhang for providing the AdvRes152 pretrained model. This work was supported by NSF Grant No. 1718221 and ONR MURI Award N00014-16-1-2007.
A APPENDIX
A.1 OTHER DETAILS ON HUMAN STUDY
We also chose BIM (Kurakin et al., 2016) and CW (Carlini & Wagner, 2017) for comparing our perturbations. Since these attacks are known to have low Lp norm, we designed an aggressive version of BIM by relaxing its L∞ bound to match the norm of our attacks. We settled on two aggressive versions of BIM with average L∞ = {0.21, 0.347}, which we refer to as BIM0.21 and BIM0.34. The average user preference for BIM drops drastically from 0.497 to 0.332 when we relax the norm to BIM0.34; the decrease in user preference for tAdv (0.433 to 0.406) and cAdv (0.476 to 0.437) is not significant. In Fig. 7, we plot a density plot of L∞ vs user preference scores.
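For reference, here is a minimal PyTorch sketch of the relaxed-bound BIM baseline described above; the step size and iteration count are illustrative assumptions rather than the exact settings used.

```python
# Basic iterative method (BIM) with a deliberately relaxed L-infinity bound,
# used only to match the perturbation norm of our attacks in the user study.
import torch
import torch.nn.functional as F_nn

def relaxed_bim(model, x, y, eps=0.347, alpha=0.01, steps=40):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F_nn.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()    # untargeted ascent step
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # project onto relaxed ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)            # keep a valid image
    return x_adv.detach()
```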
A.2 ADDITIONAL RESULTS
Whitebox targeted attack success rate (%). Our attacks are highly successful on different models across all strategies. tAdv results are for α = 250, β = 10−3 and iter = 1.

Model               Resnet50   Dense121   VGG 19
Clean accuracy      76.15      74.65      74.24

Attack success:
Random Ts           99.67      99.72      96.16
Random Target Ts    99.72      99.89      99.94
Nearest Target Ts   97.99      99.72      99.50
cAdv4, 25 hints     99.78      99.83      99.93
cAdv4, 50 hints     99.78      99.83      100.00
cAdv4, 100 hints    99.44      99.50      99.93

tAdv ablation study. Whitebox targeted success rate (%) with nearest-target Ts (texture source). Columns show increasing texture weight (α); rows show increasing adversarial cross-entropy weight (β). All attacks are done on Resnet50.

β \ α    250     500     750     1000
0        25.00   99.61   98.55   95.92
10−4     99.88   99.61   98.55   95.92
10−3     97.99   99.27   99.66   99.50
10−2     96.26   95.42   96.32   96.59

Table 6: Ablation Studies.
Figure 9: Additional qualitative examples for controlling cAdv (columns: GT, k=1, k=2, k=4, k=6, k=8). We show a comparison of sampling 50 color hints from k clusters with low entropy. All images are attacked to golf-cart. Even-numbered rows visualize our cluster segments, with darker colors representing higher mean entropy and red dots representing the locations we sample hints from. Sampling hints across more clusters gives less color variety.

1. How do the proposed cAdv and tAdv methods manipulate color and texture to create adversarial examples?
2. What are the optimization objectives used in the paper for finding adversarial examples with respect to each semantic technique?
3. Can you explain the significance of the user study performed in the paper and what it shows about the generated examples?
4. How do the proposed methods compare to existing defense methods in terms of robustness and transferability across models?
5. What are some potential limitations or areas for improvement in the proposed methods, such as the effectiveness of the texture addition or the choice of notation used in Equation 1?

Review
The paper proposes cAdv and tAdv, two new unrestricted adversarial attack methods that manipulate either the color or the texture of an image. To this end, the paper employs an existing parametrized colorization technique (and a texture transfer method) and proposes optimization objectives for finding adversarial examples with respect to each semantic technique. Experimental results show that the proposed methods are more robust against existing defense methods and more transferable across models. The paper also performs a user study to show that the generated examples are fairly imperceptible, like the C&W attack.
Overall, I agree that seeking new kinds of attack is important, and the methods are clearly presented and carry a new message for the community: adversarial examples can even be found by exploiting semantic features that humans also utilize, since DNNs tend to over-utilize them, e.g., colors. These claims are supported by the experiments showing that the generated examples are more transferable across robust classifiers. Personally, I liked the idea of using another colorization method to design cAdv and the use of K-means clustering to control the imperceptibility.
- Some readers may wonder how the "average-case" corruption robustness behaves for both cAdv and tAdv, e.g., considering random colorization. Would it be worse than the robustness to Gaussian noise?
- One of my concerns with tAdv is whether the added texture is indeed effective in reducing the accuracy, or whether it is just due to the (albeit small) β term in the objective. Adding an ablation of the β = 0 case to the results would greatly help the understanding of the method.
- Eq. 1: I think F should denote the classifier under attack, but the description says it is the colorization network. As it seems to me that θ is nevertheless for the colorization network, I feel the notation should be refined for better understanding by readers.
ICLR | Title
Unrestricted Adversarial Examples via Semantic Manipulation
1. What is the novel approach proposed by the paper in generating adversarial examples?
2. What are the strengths of the paper, particularly in its clarity and organization?
3. What are the weaknesses of the paper, especially regarding the choice of discriminator?
4. How would using a stronger discriminator, such as a ResNet 50 trained on augmented datasets with color jittering, affect the results?
5. How would incorporating additional information, such as taking the color channel as an explicit input, improve the performance of the discriminator?
6. How would applying a Gaussian filter on the images before feeding them into the discriminator impact the effectiveness of the attack?

Review
This paper proposes to generate semantically meaningful adversarial examples in terms of color or texture. To make the manipulated images photo-realistic, the colors to be replaced are chosen by entropy values, while textures are replaced with a style-transfer technique.
The paper is written clearly and well organized, which makes it easy to understand. The graphs and equations are properly shown. The idea of using color replacement and texture transfer is interesting and novel.
A minor weakness is that the discriminator – a pretrained ResNet 50 – is too weak for this scenario. What about a ResNet 50 trained on augmented datasets with color jittering?
What about a fine-tuned ResNet 50 that takes the color channels as explicit input, since the attack uses this additional information?
As the tAdv attack seems to manipulate the high-frequency texture of images, how about applying a Gaussian filter on the images and feeding them into the discriminator again? Is the attack still effective or not?
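A minimal sketch of the test being suggested here (our reading of the question, not an experiment reported in the paper), assuming SciPy and an HxWx3 image in [0, 1]:

```python
# Blur the adversarial image before classification to test whether the
# high-frequency tAdv texture survives low-pass filtering.
from scipy.ndimage import gaussian_filter

def blur_then_classify(model, adv_image, sigma=1.0):
    # Smooth spatial dimensions only (not the channel axis).
    blurred = gaussian_filter(adv_image, sigma=(sigma, sigma, 0))
    return model(blurred)
```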